Test Report: Docker_macOS 18222

                    
364dec8bbfa467ece5e4dc002f47e6311a48ec7e:2024-02-26:33307

Failed tests (11/333)

TestIngressAddonLegacy/StartLegacyK8sCluster (279.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0226 02:42:59.877014   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:43:27.564326   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:43:32.475733   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.480838   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.491276   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.511354   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.551530   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.631851   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.791953   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:33.113945   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:33.754179   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:35.034304   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:37.594539   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:42.714860   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:52.955103   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:44:13.435219   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:44:54.395293   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m39.107226685s)

-- stdout --
	* [ingress-addon-legacy-138000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-138000 in cluster ingress-addon-legacy-138000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
-- /stdout --
** stderr ** 
	I0226 02:41:08.332202   12956 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:41:08.332458   12956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:41:08.332464   12956 out.go:304] Setting ErrFile to fd 2...
	I0226 02:41:08.332467   12956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:41:08.332665   12956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:41:08.334107   12956 out.go:298] Setting JSON to false
	I0226 02:41:08.359686   12956 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9639,"bootTime":1708934429,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:41:08.359776   12956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:41:08.380960   12956 out.go:177] * [ingress-addon-legacy-138000] minikube v1.32.0 on Darwin 14.3.1
	I0226 02:41:08.422757   12956 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 02:41:08.422786   12956 notify.go:220] Checking for updates...
	I0226 02:41:08.465801   12956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:41:08.486676   12956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:41:08.507976   12956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:41:08.528796   12956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 02:41:08.549574   12956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 02:41:08.573250   12956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:41:08.629327   12956 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:41:08.629482   12956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:41:08.729705   12956 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 10:41:08.718875515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:41:08.750761   12956 out.go:177] * Using the docker driver based on user configuration
	I0226 02:41:08.792844   12956 start.go:299] selected driver: docker
	I0226 02:41:08.792863   12956 start.go:903] validating driver "docker" against <nil>
	I0226 02:41:08.792877   12956 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 02:41:08.797459   12956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:41:08.896745   12956 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 10:41:08.886452523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:41:08.896934   12956 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 02:41:08.897135   12956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 02:41:08.918586   12956 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 02:41:08.939575   12956 cni.go:84] Creating CNI manager for ""
	I0226 02:41:08.939599   12956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 02:41:08.939613   12956 start_flags.go:323] config:
	{Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:41:08.962703   12956 out.go:177] * Starting control plane node ingress-addon-legacy-138000 in cluster ingress-addon-legacy-138000
	I0226 02:41:09.004478   12956 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 02:41:09.025511   12956 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 02:41:09.067580   12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 02:41:09.067623   12956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 02:41:09.119319   12956 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 02:41:09.119358   12956 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 02:41:09.362782   12956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0226 02:41:09.362835   12956 cache.go:56] Caching tarball of preloaded images
	I0226 02:41:09.363930   12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 02:41:09.384905   12956 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0226 02:41:09.426582   12956 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:41:10.006171   12956 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0226 02:41:29.116993   12956 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:41:29.117559   12956 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:41:29.708034   12956 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0226 02:41:29.708295   12956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json ...
	I0226 02:41:29.708322   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json: {Name:mk8ac7ec1fa1fe03846778d935a41a5d30088c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:29.709980   12956 cache.go:194] Successfully downloaded all kic artifacts
	I0226 02:41:29.710015   12956 start.go:365] acquiring machines lock for ingress-addon-legacy-138000: {Name:mk92e967781564262689291af39d6cffbe63fff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 02:41:29.710229   12956 start.go:369] acquired machines lock for "ingress-addon-legacy-138000" in 203.659µs
	I0226 02:41:29.710420   12956 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 02:41:29.710476   12956 start.go:125] createHost starting for "" (driver="docker")
	I0226 02:41:29.736485   12956 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0226 02:41:29.736756   12956 start.go:159] libmachine.API.Create for "ingress-addon-legacy-138000" (driver="docker")
	I0226 02:41:29.736797   12956 client.go:168] LocalClient.Create starting
	I0226 02:41:29.736968   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem
	I0226 02:41:29.737056   12956 main.go:141] libmachine: Decoding PEM data...
	I0226 02:41:29.737086   12956 main.go:141] libmachine: Parsing certificate...
	I0226 02:41:29.737188   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem
	I0226 02:41:29.737262   12956 main.go:141] libmachine: Decoding PEM data...
	I0226 02:41:29.737282   12956 main.go:141] libmachine: Parsing certificate...
	I0226 02:41:29.757698   12956 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-138000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 02:41:29.809760   12956 cli_runner.go:211] docker network inspect ingress-addon-legacy-138000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 02:41:29.809871   12956 network_create.go:281] running [docker network inspect ingress-addon-legacy-138000] to gather additional debugging logs...
	I0226 02:41:29.809888   12956 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-138000
	W0226 02:41:29.860297   12956 cli_runner.go:211] docker network inspect ingress-addon-legacy-138000 returned with exit code 1
	I0226 02:41:29.860349   12956 network_create.go:284] error running [docker network inspect ingress-addon-legacy-138000]: docker network inspect ingress-addon-legacy-138000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-138000 not found
	I0226 02:41:29.860371   12956 network_create.go:286] output of [docker network inspect ingress-addon-legacy-138000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-138000 not found
	
	** /stderr **
	I0226 02:41:29.860544   12956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 02:41:29.911405   12956 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00228f180}
	I0226 02:41:29.911449   12956 network_create.go:124] attempt to create docker network ingress-addon-legacy-138000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0226 02:41:29.911519   12956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 ingress-addon-legacy-138000
	I0226 02:41:29.999005   12956 network_create.go:108] docker network ingress-addon-legacy-138000 192.168.49.0/24 created
	I0226 02:41:29.999054   12956 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-138000" container
	I0226 02:41:29.999170   12956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 02:41:30.049038   12956 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-138000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --label created_by.minikube.sigs.k8s.io=true
	I0226 02:41:30.100416   12956 oci.go:103] Successfully created a docker volume ingress-addon-legacy-138000
	I0226 02:41:30.100576   12956 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-138000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --entrypoint /usr/bin/test -v ingress-addon-legacy-138000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 02:41:30.524487   12956 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-138000
	I0226 02:41:30.524530   12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 02:41:30.524546   12956 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 02:41:30.524656   12956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-138000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 02:41:33.272101   12956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-138000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.747380863s)
	I0226 02:41:33.272128   12956 kic.go:203] duration metric: took 2.747584 seconds to extract preloaded images to volume
	I0226 02:41:33.272245   12956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 02:41:33.374571   12956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-138000 --name ingress-addon-legacy-138000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --network ingress-addon-legacy-138000 --ip 192.168.49.2 --volume ingress-addon-legacy-138000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 02:41:33.638000   12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Running}}
	I0226 02:41:33.690776   12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:41:33.745393   12956 cli_runner.go:164] Run: docker exec ingress-addon-legacy-138000 stat /var/lib/dpkg/alternatives/iptables
	I0226 02:41:33.876006   12956 oci.go:144] the created container "ingress-addon-legacy-138000" has a running status.
	I0226 02:41:33.876053   12956 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa...
	I0226 02:41:33.989014   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0226 02:41:33.989117   12956 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 02:41:34.050381   12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:41:34.105577   12956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 02:41:34.105602   12956 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-138000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 02:41:34.216841   12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:41:34.267859   12956 machine.go:88] provisioning docker machine ...
	I0226 02:41:34.267919   12956 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-138000"
	I0226 02:41:34.268023   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:34.319800   12956 main.go:141] libmachine: Using SSH client type: native
	I0226 02:41:34.320035   12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil>  [] 0s} 127.0.0.1 58410 <nil> <nil>}
	I0226 02:41:34.320054   12956 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-138000 && echo "ingress-addon-legacy-138000" | sudo tee /etc/hostname
	I0226 02:41:34.477398   12956 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-138000
	
	I0226 02:41:34.477497   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:34.528777   12956 main.go:141] libmachine: Using SSH client type: native
	I0226 02:41:34.528968   12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil>  [] 0s} 127.0.0.1 58410 <nil> <nil>}
	I0226 02:41:34.528983   12956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 02:41:34.662731   12956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 02:41:34.662758   12956 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 02:41:34.662787   12956 ubuntu.go:177] setting up certificates
	I0226 02:41:34.662798   12956 provision.go:83] configureAuth start
	I0226 02:41:34.662870   12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
	I0226 02:41:34.713998   12956 provision.go:138] copyHostCerts
	I0226 02:41:34.714042   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 02:41:34.714097   12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 02:41:34.714107   12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 02:41:34.714256   12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 02:41:34.714435   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 02:41:34.714463   12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 02:41:34.714468   12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 02:41:34.714582   12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 02:41:34.714743   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 02:41:34.714782   12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 02:41:34.714787   12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 02:41:34.714863   12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 02:41:34.715042   12956 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-138000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-138000]
	I0226 02:41:34.918394   12956 provision.go:172] copyRemoteCerts
	I0226 02:41:34.919121   12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 02:41:34.919187   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:34.969809   12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:41:35.071026   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0226 02:41:35.071087   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 02:41:35.113193   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0226 02:41:35.113283   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0226 02:41:35.154779   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0226 02:41:35.154841   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 02:41:35.196322   12956 provision.go:86] duration metric: configureAuth took 533.510571ms
	I0226 02:41:35.196344   12956 ubuntu.go:193] setting minikube options for container-runtime
	I0226 02:41:35.196493   12956 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:41:35.196565   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:35.247167   12956 main.go:141] libmachine: Using SSH client type: native
	I0226 02:41:35.247359   12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil>  [] 0s} 127.0.0.1 58410 <nil> <nil>}
	I0226 02:41:35.247374   12956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 02:41:35.384528   12956 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 02:41:35.384548   12956 ubuntu.go:71] root file system type: overlay
	I0226 02:41:35.384667   12956 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 02:41:35.384749   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:35.435243   12956 main.go:141] libmachine: Using SSH client type: native
	I0226 02:41:35.435442   12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil>  [] 0s} 127.0.0.1 58410 <nil> <nil>}
	I0226 02:41:35.435494   12956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 02:41:35.593641   12956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 02:41:35.593742   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:35.644499   12956 main.go:141] libmachine: Using SSH client type: native
	I0226 02:41:35.644678   12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil>  [] 0s} 127.0.0.1 58410 <nil> <nil>}
	I0226 02:41:35.644694   12956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 02:41:36.284982   12956 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 10:41:35.588250588 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 02:41:36.285003   12956 machine.go:91] provisioned docker machine in 2.017110554s
	I0226 02:41:36.285014   12956 client.go:171] LocalClient.Create took 6.548209322s
	I0226 02:41:36.285031   12956 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-138000" took 6.548279915s
	I0226 02:41:36.285039   12956 start.go:300] post-start starting for "ingress-addon-legacy-138000" (driver="docker")
	I0226 02:41:36.285046   12956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 02:41:36.285104   12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 02:41:36.285174   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:36.336735   12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:41:36.440492   12956 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 02:41:36.444775   12956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 02:41:36.444801   12956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 02:41:36.444808   12956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 02:41:36.444813   12956 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 02:41:36.444823   12956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 02:41:36.444911   12956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 02:41:36.445372   12956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 02:41:36.445387   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> /etc/ssl/certs/100262.pem
	I0226 02:41:36.445592   12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 02:41:36.460969   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 02:41:36.502415   12956 start.go:303] post-start completed in 217.354587ms
	I0226 02:41:36.503053   12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
	I0226 02:41:36.554018   12956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json ...
	I0226 02:41:36.554686   12956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 02:41:36.554758   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:36.605003   12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:41:36.695934   12956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 02:41:36.701011   12956 start.go:128] duration metric: createHost completed in 6.990524768s
	I0226 02:41:36.701031   12956 start.go:83] releasing machines lock for "ingress-addon-legacy-138000", held for 6.990796903s
	I0226 02:41:36.701115   12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
	I0226 02:41:36.751911   12956 ssh_runner.go:195] Run: cat /version.json
	I0226 02:41:36.751988   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:36.752494   12956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 02:41:36.752739   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:36.805879   12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:41:36.806020   12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:41:36.896719   12956 ssh_runner.go:195] Run: systemctl --version
	I0226 02:41:36.997165   12956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 02:41:37.003286   12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 02:41:37.046010   12956 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0226 02:41:37.046073   12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 02:41:37.075530   12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 02:41:37.104593   12956 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
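The two find/sed passes above normalize any bridge and podman CNI configs so their pod subnet matches the 10.244.0.0/16 CIDR used later by kubeadm. A simplified sketch of the same rewrite against one of the files the log reports patching (the logged command additionally drops IPv6 dst/subnet entries, which this sketch omits):

    # Point the bridge CNI config at the kubeadm pod CIDR.
    sudo sed -i -r 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|g' /etc/cni/net.d/100-crio-bridge.conf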
	I0226 02:41:37.104613   12956 start.go:475] detecting cgroup driver to use...
	I0226 02:41:37.104625   12956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 02:41:37.104726   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 02:41:37.132785   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0226 02:41:37.149959   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 02:41:37.167086   12956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 02:41:37.167138   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 02:41:37.183103   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 02:41:37.199898   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 02:41:37.216914   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 02:41:37.232787   12956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 02:41:37.249132   12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 02:41:37.265946   12956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 02:41:37.282034   12956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 02:41:37.297098   12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 02:41:37.359583   12956 ssh_runner.go:195] Run: sudo systemctl restart containerd
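The preceding block is the containerd reconfiguration pass: pin the pause image, disable the systemd cgroup hierarchy to match the host's cgroupfs driver, migrate runtime entries to runc v2, and point conf_dir at /etc/cni/net.d, then reload and restart. Condensed to its essentials (commands taken verbatim from the log):

    # Align containerd with the detected cgroupfs driver, then restart it.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd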
	I0226 02:41:37.453036   12956 start.go:475] detecting cgroup driver to use...
	I0226 02:41:37.453057   12956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 02:41:37.453110   12956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 02:41:37.471853   12956 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 02:41:37.471916   12956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 02:41:37.491019   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 02:41:37.522894   12956 ssh_runner.go:195] Run: which cri-dockerd
	I0226 02:41:37.527486   12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 02:41:37.543712   12956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 02:41:37.574316   12956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 02:41:37.640261   12956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 02:41:37.721344   12956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 02:41:37.721417   12956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 02:41:37.750532   12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 02:41:37.813807   12956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 02:41:38.063969   12956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 02:41:38.085764   12956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
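docker.go:574 writes a 130-byte /etc/docker/daemon.json to force the same cgroupfs driver on Docker, but the log does not show the file's contents. A plausible minimal equivalent (the exact JSON is an assumption, not a quote from this run):

    # Hypothetical daemon.json pinning the cgroup driver; restart to apply.
    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker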
	I0226 02:41:38.157718   12956 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0226 02:41:38.157814   12956 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-138000 dig +short host.docker.internal
	I0226 02:41:38.271428   12956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 02:41:38.271908   12956 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 02:41:38.276442   12956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
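The hosts update above is deliberately idempotent: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts (a bare >> append would accumulate duplicate entries across restarts). The same idiom, spread out with comments:

    # Rewrite /etc/hosts without duplicating the entry on repeated runs.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.65.254\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts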
	I0226 02:41:38.294118   12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:41:38.363031   12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0226 02:41:38.363129   12956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 02:41:38.381370   12956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0226 02:41:38.381383   12956 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0226 02:41:38.381441   12956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 02:41:38.396873   12956 ssh_runner.go:195] Run: which lz4
	I0226 02:41:38.401166   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0226 02:41:38.401456   12956 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 02:41:38.405696   12956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 02:41:38.405722   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0226 02:41:45.081065   12956 docker.go:649] Took 6.679828 seconds to copy over tarball
	I0226 02:41:45.081136   12956 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 02:41:46.784820   12956 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.703664141s)
	I0226 02:41:46.784845   12956 ssh_runner.go:146] rm: /preloaded.tar.lz4
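The preload path seeds Docker's image store wholesale: the ~400 MB lz4 tarball is scp'd to /preloaded.tar.lz4, unpacked over /var with extended attributes preserved, and then removed. The extraction step, exactly as logged:

    # Unpack the preloaded image tarball into /var, keeping security xattrs.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4   # rm flags are an assumption; the log only records the rm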
	I0226 02:41:46.838243   12956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 02:41:46.853946   12956 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0226 02:41:46.883619   12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 02:41:46.945748   12956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 02:41:48.289435   12956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.343665647s)
	I0226 02:41:48.289539   12956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 02:41:48.306072   12956 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0226 02:41:48.306094   12956 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0226 02:41:48.306102   12956 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 02:41:48.311155   12956 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0226 02:41:48.311448   12956 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 02:41:48.312194   12956 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0226 02:41:48.312221   12956 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 02:41:48.312264   12956 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 02:41:48.313284   12956 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 02:41:48.313446   12956 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0226 02:41:48.313606   12956 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 02:41:48.317125   12956 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 02:41:48.318490   12956 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0226 02:41:48.319108   12956 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 02:41:48.319567   12956 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0226 02:41:48.319717   12956 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 02:41:48.319794   12956 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 02:41:48.319953   12956 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0226 02:41:48.320421   12956 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 02:41:50.287310   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0226 02:41:50.304715   12956 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0226 02:41:50.304750   12956 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 02:41:50.304806   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0226 02:41:50.322604   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0226 02:41:50.349691   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0226 02:41:50.370187   12956 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0226 02:41:50.370214   12956 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 02:41:50.370268   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0226 02:41:50.387021   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0226 02:41:50.410336   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0226 02:41:50.412691   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0226 02:41:50.421526   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0226 02:41:50.429881   12956 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0226 02:41:50.429917   12956 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0226 02:41:50.429986   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0226 02:41:50.432394   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 02:41:50.432504   12956 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0226 02:41:50.432532   12956 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 02:41:50.432575   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0226 02:41:50.437722   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0226 02:41:50.442709   12956 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0226 02:41:50.442741   12956 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0226 02:41:50.442830   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0226 02:41:50.451329   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0226 02:41:50.452666   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0226 02:41:50.452829   12956 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0226 02:41:50.452866   12956 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 02:41:50.452967   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 02:41:50.462701   12956 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0226 02:41:50.462735   12956 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0226 02:41:50.462791   12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0226 02:41:50.466382   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0226 02:41:50.476102   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0226 02:41:50.480495   12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0226 02:41:51.136424   12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 02:41:51.155806   12956 cache_images.go:92] LoadImages completed in 2.849692645s
	W0226 02:41:51.155847   12956 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
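The failure above follows from the two docker.go:691 lines: the preload ships images under their old k8s.gcr.io names, the image loader wants them under registry.k8s.io names, and the fallback on-disk cache has no kube-scheduler_v1.18.20 file. To see what the cache actually holds (directory taken from the error message):

    # List the cached images the loader fell back to.
    ls -la /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/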
	I0226 02:41:51.155920   12956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 02:41:51.204899   12956 cni.go:84] Creating CNI manager for ""
	I0226 02:41:51.204917   12956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 02:41:51.204933   12956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 02:41:51.204946   12956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-138000 NodeName:ingress-addon-legacy-138000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 02:41:51.205039   12956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-138000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 02:41:51.205101   12956 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
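The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders; the scp lines just below place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to the base unit. On the node, the merged result can be inspected with standard systemd tooling (a sketch):

    # Show the base kubelet unit plus the 10-kubeadm.conf drop-in written below.
    sudo systemctl cat kubelet
    sudo systemctl daemon-reload   # re-read units after the drop-in changes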
	I0226 02:41:51.205152   12956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0226 02:41:51.220959   12956 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 02:41:51.221011   12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 02:41:51.236801   12956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0226 02:41:51.265980   12956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0226 02:41:51.295339   12956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
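With kubeadm.yaml.new staged (it is copied to /var/tmp/minikube/kubeadm.yaml right before init, further down), the control-plane images could be pre-pulled against the very same config, as the kubeadm preflight output later in this log suggests. A sketch using the paths from this run:

    # Pre-pull images for the rendered config with the pinned kubeadm binary.
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config images pull \
      --config /var/tmp/minikube/kubeadm.yaml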
	I0226 02:41:51.325584   12956 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0226 02:41:51.329712   12956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 02:41:51.346864   12956 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000 for IP: 192.168.49.2
	I0226 02:41:51.346886   12956 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.347081   12956 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 02:41:51.347148   12956 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 02:41:51.347194   12956 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key
	I0226 02:41:51.347212   12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt with IP's: []
	I0226 02:41:51.450510   12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt ...
	I0226 02:41:51.450525   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt: {Name:mk162d6e138029c5409501d0c37715272ac2978c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.451195   12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key ...
	I0226 02:41:51.451210   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key: {Name:mk64327ecc8de853fa995d7915c82afaca08b48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.452205   12956 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2
	I0226 02:41:51.452233   12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 02:41:51.557181   12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 ...
	I0226 02:41:51.557196   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2: {Name:mk245917daff69f5757c601893aa6619e282cc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.558128   12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2 ...
	I0226 02:41:51.558138   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2: {Name:mkae51405217599d74520e6e64596809250777dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.558772   12956 certs.go:337] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt
	I0226 02:41:51.558956   12956 certs.go:341] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key
	I0226 02:41:51.559123   12956 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key
	I0226 02:41:51.559140   12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt with IP's: []
	I0226 02:41:51.782261   12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt ...
	I0226 02:41:51.782277   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt: {Name:mk5a3c2f0dbf18bf73ac054c18315e7a14d6c490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.782934   12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key ...
	I0226 02:41:51.782945   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key: {Name:mkc7ee880fc47b164b9c5d34f1104682413a395b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:41:51.783392   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 02:41:51.783422   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 02:41:51.783449   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 02:41:51.783469   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 02:41:51.783487   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 02:41:51.783504   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0226 02:41:51.783523   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 02:41:51.783539   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 02:41:51.783890   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 02:41:51.783958   12956 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 02:41:51.783967   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 02:41:51.783999   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 02:41:51.784027   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 02:41:51.784069   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 02:41:51.784133   12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 02:41:51.784169   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> /usr/share/ca-certificates/100262.pem
	I0226 02:41:51.784188   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 02:41:51.784202   12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem -> /usr/share/ca-certificates/10026.pem
	I0226 02:41:51.784683   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 02:41:51.826749   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 02:41:51.868923   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 02:41:51.908670   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 02:41:51.948507   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 02:41:51.989144   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 02:41:52.029854   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 02:41:52.071476   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 02:41:52.111911   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 02:41:52.153243   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 02:41:52.194681   12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 02:41:52.235575   12956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
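After the batch of scp transfers above, a single listing verifies that the certificates and the kubeconfig landed where the later steps expect them (paths are the scp targets from the log):

    # Spot-check the transferred PKI material and kubeconfig on the node.
    sudo ls -la /var/lib/minikube/certs /usr/share/ca-certificates /var/lib/minikube/kubeconfig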
	I0226 02:41:52.265333   12956 ssh_runner.go:195] Run: openssl version
	I0226 02:41:52.271596   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 02:41:52.287681   12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 02:41:52.292031   12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 02:41:52.292081   12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 02:41:52.298626   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
	I0226 02:41:52.315093   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 02:41:52.330885   12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 02:41:52.335099   12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 02:41:52.335154   12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 02:41:52.341495   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 02:41:52.357293   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 02:41:52.373815   12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 02:41:52.377855   12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 02:41:52.377896   12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 02:41:52.384159   12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
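Each CA above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) so TLS clients can resolve it by hash lookup. The idiom, generalized (CERT is a placeholder, not a file from this run):

    # Link a CA under its subject hash so OpenSSL's cert directory finds it.
    CERT=/usr/share/ca-certificates/example.pem
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"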
	I0226 02:41:52.399866   12956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 02:41:52.404034   12956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 02:41:52.404079   12956 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:41:52.404177   12956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 02:41:52.420857   12956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 02:41:52.435699   12956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 02:41:52.450032   12956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 02:41:52.450099   12956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 02:41:52.465742   12956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 02:41:52.465769   12956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 02:41:52.518292   12956 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0226 02:41:52.518353   12956 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 02:41:52.754643   12956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 02:41:52.754723   12956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 02:41:52.754804   12956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 02:41:52.967176   12956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 02:41:52.967825   12956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 02:41:52.967876   12956 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 02:41:53.041252   12956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 02:41:53.062666   12956 out.go:204]   - Generating certificates and keys ...
	I0226 02:41:53.062750   12956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 02:41:53.062810   12956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 02:41:53.117652   12956 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 02:41:53.240616   12956 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 02:41:53.490408   12956 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 02:41:53.651901   12956 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 02:41:53.787493   12956 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 02:41:53.787621   12956 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 02:41:53.919830   12956 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 02:41:53.919943   12956 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 02:41:54.039569   12956 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 02:41:54.245122   12956 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 02:41:54.341299   12956 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 02:41:54.341353   12956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 02:41:54.482884   12956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 02:41:54.598510   12956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 02:41:54.795980   12956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 02:41:54.842442   12956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 02:41:54.842922   12956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 02:41:54.864524   12956 out.go:204]   - Booting up control plane ...
	I0226 02:41:54.864658   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 02:41:54.864779   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 02:41:54.864894   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 02:41:54.865045   12956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 02:41:54.865300   12956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 02:42:34.850883   12956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 02:42:34.851576   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:42:34.851743   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:42:39.853214   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:42:39.853371   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:42:49.854432   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:42:49.854597   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:43:09.855445   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:43:09.855610   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:43:49.862209   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:43:49.862393   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:43:49.862411   12956 kubeadm.go:322] 
	I0226 02:43:49.862448   12956 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0226 02:43:49.862536   12956 kubeadm.go:322] 		timed out waiting for the condition
	I0226 02:43:49.862549   12956 kubeadm.go:322] 
	I0226 02:43:49.862588   12956 kubeadm.go:322] 	This error is likely caused by:
	I0226 02:43:49.862633   12956 kubeadm.go:322] 		- The kubelet is not running
	I0226 02:43:49.862722   12956 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 02:43:49.862730   12956 kubeadm.go:322] 
	I0226 02:43:49.862819   12956 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 02:43:49.862854   12956 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0226 02:43:49.862895   12956 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0226 02:43:49.862904   12956 kubeadm.go:322] 
	I0226 02:43:49.863015   12956 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 02:43:49.863101   12956 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0226 02:43:49.863114   12956 kubeadm.go:322] 
	I0226 02:43:49.863189   12956 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0226 02:43:49.863241   12956 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0226 02:43:49.863301   12956 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0226 02:43:49.863327   12956 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0226 02:43:49.863331   12956 kubeadm.go:322] 
	I0226 02:43:49.867677   12956 kubeadm.go:322] W0226 10:41:52.517631    1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0226 02:43:49.867838   12956 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 02:43:49.867937   12956 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 02:43:49.868047   12956 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0226 02:43:49.868159   12956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 02:43:49.868264   12956 kubeadm.go:322] W0226 10:41:54.846425    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 02:43:49.868369   12956 kubeadm.go:322] W0226 10:41:54.847975    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 02:43:49.868434   12956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 02:43:49.868502   12956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
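kubeadm's own advice above is the right triage order for this failure mode; collected into one runnable pass on the node (commands taken from the error text, with output trimming added):

    # Triage the kubelet that never answered on :10248.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    docker ps -a | grep kube | grep -v pause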
	W0226 02:43:49.868633   12956 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:41:52.517631    1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:41:54.846425    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:41:54.847975    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:41:52.517631    1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:41:54.846425    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:41:54.847975    1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 02:43:49.868667   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 02:43:50.286482   12956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 02:43:50.304515   12956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 02:43:50.304571   12956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 02:43:50.320041   12956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 02:43:50.320067   12956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 02:43:50.370901   12956 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0226 02:43:50.370963   12956 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 02:43:50.607495   12956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 02:43:50.607583   12956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 02:43:50.607661   12956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 02:43:50.767914   12956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 02:43:50.769170   12956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 02:43:50.769289   12956 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 02:43:50.845255   12956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 02:43:50.866735   12956 out.go:204]   - Generating certificates and keys ...
	I0226 02:43:50.866841   12956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 02:43:50.866922   12956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 02:43:50.867033   12956 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 02:43:50.867128   12956 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 02:43:50.867187   12956 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 02:43:50.867276   12956 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 02:43:50.867370   12956 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 02:43:50.867448   12956 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 02:43:50.867561   12956 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 02:43:50.867634   12956 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 02:43:50.867670   12956 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 02:43:50.867718   12956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 02:43:51.084598   12956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 02:43:51.226324   12956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 02:43:51.385157   12956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 02:43:51.828237   12956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 02:43:51.829687   12956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 02:43:51.849576   12956 out.go:204]   - Booting up control plane ...
	I0226 02:43:51.849653   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 02:43:51.849718   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 02:43:51.849784   12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 02:43:51.849865   12956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 02:43:51.849999   12956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 02:44:31.838370   12956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 02:44:31.838699   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:44:31.838852   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:44:36.839843   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:44:36.840005   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:44:46.841271   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:44:46.841439   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:45:06.842310   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:45:06.842456   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:45:46.845228   12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 02:45:46.845459   12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 02:45:46.845472   12956 kubeadm.go:322] 
	I0226 02:45:46.845511   12956 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0226 02:45:46.845572   12956 kubeadm.go:322] 		timed out waiting for the condition
	I0226 02:45:46.845588   12956 kubeadm.go:322] 
	I0226 02:45:46.845646   12956 kubeadm.go:322] 	This error is likely caused by:
	I0226 02:45:46.845685   12956 kubeadm.go:322] 		- The kubelet is not running
	I0226 02:45:46.845824   12956 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 02:45:46.845842   12956 kubeadm.go:322] 
	I0226 02:45:46.845972   12956 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 02:45:46.846015   12956 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0226 02:45:46.846050   12956 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0226 02:45:46.846055   12956 kubeadm.go:322] 
	I0226 02:45:46.846194   12956 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 02:45:46.846283   12956 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0226 02:45:46.846292   12956 kubeadm.go:322] 
	I0226 02:45:46.846394   12956 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0226 02:45:46.846464   12956 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0226 02:45:46.846549   12956 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0226 02:45:46.846583   12956 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0226 02:45:46.846590   12956 kubeadm.go:322] 
	I0226 02:45:46.851149   12956 kubeadm.go:322] W0226 10:43:50.370203    4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0226 02:45:46.851290   12956 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 02:45:46.851355   12956 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 02:45:46.851494   12956 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0226 02:45:46.851583   12956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 02:45:46.851690   12956 kubeadm.go:322] W0226 10:43:51.833273    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 02:45:46.851812   12956 kubeadm.go:322] W0226 10:43:51.834003    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 02:45:46.851886   12956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 02:45:46.851970   12956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 02:45:46.852006   12956 kubeadm.go:406] StartCluster complete in 3m54.448032528s
	I0226 02:45:46.853836   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 02:45:46.872150   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.872165   12956 logs.go:278] No container was found matching "kube-apiserver"
	I0226 02:45:46.872229   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 02:45:46.888276   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.888292   12956 logs.go:278] No container was found matching "etcd"
	I0226 02:45:46.888366   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 02:45:46.904401   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.904416   12956 logs.go:278] No container was found matching "coredns"
	I0226 02:45:46.904484   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 02:45:46.920274   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.920291   12956 logs.go:278] No container was found matching "kube-scheduler"
	I0226 02:45:46.920371   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 02:45:46.936579   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.936595   12956 logs.go:278] No container was found matching "kube-proxy"
	I0226 02:45:46.936676   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 02:45:46.954227   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.954242   12956 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 02:45:46.954310   12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 02:45:46.971521   12956 logs.go:276] 0 containers: []
	W0226 02:45:46.971537   12956 logs.go:278] No container was found matching "kindnet"
	I0226 02:45:46.971545   12956 logs.go:123] Gathering logs for kubelet ...
	I0226 02:45:46.971551   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 02:45:47.013149   12956 logs.go:123] Gathering logs for dmesg ...
	I0226 02:45:47.013167   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 02:45:47.033154   12956 logs.go:123] Gathering logs for describe nodes ...
	I0226 02:45:47.033169   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 02:45:47.097269   12956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 02:45:47.097287   12956 logs.go:123] Gathering logs for Docker ...
	I0226 02:45:47.097295   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 02:45:47.118608   12956 logs.go:123] Gathering logs for container status ...
	I0226 02:45:47.118638   12956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 02:45:47.180412   12956 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:43:50.370203    4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:43:51.833273    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:43:51.834003    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 02:45:47.180436   12956 out.go:239] * 
	W0226 02:45:47.180470   12956 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:43:50.370203    4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:43:51.833273    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:43:51.834003    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 02:45:47.180485   12956 out.go:239] * 
	W0226 02:45:47.181119   12956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 02:45:47.246945   12956 out.go:177] 
	W0226 02:45:47.289975   12956 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0226 10:43:50.370203    4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0226 10:43:51.833273    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0226 10:43:51.834003    4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 02:45:47.290048   12956 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 02:45:47.290095   12956 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 02:45:47.332822   12956 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (279.15s)
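
The failure signature is identical across both kubeadm attempts in this run: the kubelet health endpoint (http://localhost:10248/healthz) never answers, while preflight warns that Docker reports the "cgroupfs" cgroup driver and that Docker 25.0.3 is far newer than the last version validated for Kubernetes v1.18.20 (19.03). That is consistent with a kubelet/container-runtime cgroup-driver mismatch rather than a flake, which is also where minikube's own suggestion points. A minimal triage sketch follows; the profile name, Kubernetes version, memory flag, and the cgroup-driver suggestion are taken from the log above, while the docker info check and the tail length are illustrative additions, not commands the test itself ran:

	# Inspect the kubelet inside the node, per the hints in the kubeadm output
	minikube -p ingress-addon-legacy-138000 ssh -- sudo systemctl status kubelet
	minikube -p ingress-addon-legacy-138000 ssh -- sudo journalctl -xeu kubelet | tail -n 50

	# Check which cgroup driver Docker reports inside the node
	minikube -p ingress-addon-legacy-138000 ssh -- docker info --format '{{.CgroupDriver}}'

	# Retry with the kubelet pinned to the systemd driver, as the log's suggestion recommends
	minikube start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 \
		--driver=docker --extra-config=kubelet.cgroup-driver=systemd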

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (73.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 addons enable ingress --alsologtostderr -v=5
E0226 02:46:16.315563   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m13.30845099s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 02:45:47.481376   13148 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:45:47.482310   13148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:45:47.482319   13148 out.go:304] Setting ErrFile to fd 2...
	I0226 02:45:47.482325   13148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:45:47.482748   13148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:45:47.483440   13148 mustload.go:65] Loading cluster: ingress-addon-legacy-138000
	I0226 02:45:47.483861   13148 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:45:47.483879   13148 addons.go:597] checking whether the cluster is paused
	I0226 02:45:47.483962   13148 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:45:47.483979   13148 host.go:66] Checking if "ingress-addon-legacy-138000" exists ...
	I0226 02:45:47.484565   13148 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:45:47.643516   13148 ssh_runner.go:195] Run: systemctl --version
	I0226 02:45:47.643599   13148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:45:47.694762   13148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:45:47.787653   13148 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 02:45:47.826423   13148 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0226 02:45:47.847388   13148 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:45:47.847410   13148 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-138000"
	I0226 02:45:47.847426   13148 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-138000"
	I0226 02:45:47.847498   13148 host.go:66] Checking if "ingress-addon-legacy-138000" exists ...
	I0226 02:45:47.848160   13148 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:45:47.920174   13148 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0226 02:45:47.942232   13148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0226 02:45:47.984364   13148 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0226 02:45:48.005041   13148 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0226 02:45:48.026594   13148 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0226 02:45:48.026624   13148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0226 02:45:48.026766   13148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:45:48.079067   13148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:45:48.195827   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:48.267505   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:48.267539   13148 retry.go:31] will retry after 161.540064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:48.431387   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:48.495533   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:48.495558   13148 retry.go:31] will retry after 208.910049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:48.706690   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:48.762820   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:48.762843   13148 retry.go:31] will retry after 283.059712ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:49.046212   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:49.108983   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:49.109001   13148 retry.go:31] will retry after 891.491378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:50.000639   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:50.069909   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:50.069938   13148 retry.go:31] will retry after 932.32081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:51.002513   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:51.066594   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:51.066612   13148 retry.go:31] will retry after 2.256735816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:53.325612   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:53.391427   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:53.391445   13148 retry.go:31] will retry after 4.237105751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:57.629452   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:45:57.686988   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:45:57.687013   13148 retry.go:31] will retry after 3.854191948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:01.541350   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:46:01.612959   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:01.612984   13148 retry.go:31] will retry after 9.50869979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:11.122600   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:46:11.182660   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:11.182678   13148 retry.go:31] will retry after 7.942448788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:19.125730   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:46:19.184252   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:19.184270   13148 retry.go:31] will retry after 13.530549216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:32.715696   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:46:32.774527   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:46:32.774548   13148 retry.go:31] will retry after 27.748296849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:00.523349   13148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0226 02:47:00.583802   13148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:00.583832   13148 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-138000"
	I0226 02:47:00.604391   13148 out.go:177] * Verifying ingress addon...
	I0226 02:47:00.627385   13148 out.go:177] 
	W0226 02:47:00.648606   13148 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-138000" does not exist: client config: context "ingress-addon-legacy-138000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-138000" does not exist: client config: context "ingress-addon-legacy-138000" does not exist]
	W0226 02:47:00.648634   13148 out.go:239] * 
	* 
	W0226 02:47:00.672064   13148 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 02:47:00.699905   13148 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
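Every retry in the stderr above fails the same way: nothing answers on localhost:8443 inside the node, because the control plane never came up in the preceding StartLegacyK8sCluster failure. A hedged sketch of confirming that from the host (the first command is taken verbatim from the log; the name filter in the second is an assumption about how the apiserver container is named under the docker runtime):

    # Is the node container itself running? (it is, per the inspect output below)
    docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}

    # Is any kube-apiserver container running inside the node? Empty output means the control plane is down.
    out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 ssh -- \
      sudo docker ps --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'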
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-138000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-138000:

-- stdout --
	[
	    {
	        "Id": "8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d",
	        "Created": "2024-02-26T10:41:33.425139843Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T10:41:33.630722681Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hosts",
	        "LogPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d-json.log",
	        "Name": "/ingress-addon-legacy-138000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-138000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-138000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-138000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-138000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-138000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "037f47f86ea73cc5b8cad6cb1abd2d9e0a8cf63bb1980e2696f2b03c50e89d7d",
	            "SandboxKey": "/var/run/docker/netns/037f47f86ea7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58414"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-138000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e2c2387a875",
	                        "ingress-addon-legacy-138000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "68239fd7eb4aa0ea9446ed6ab741c89cb1e63a86411c24bbcee4e3d0a45a2a0e",
	                    "EndpointID": "7ddb8eb8fd202f0c1bb9d356a1cf71722ed4acaff969cf91c902308c3d7f9bb3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-138000",
	                        "8e2c2387a875"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
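The Ports block in the inspect output above is where the SSH endpoint 127.0.0.1:58410 used throughout the stderr comes from; the same Go-template query the harness runs can be issued by hand:

    # Extract the host port mapped to the node's SSH port (22/tcp):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-138000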
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000: exit status 6 (399.374305ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 02:47:01.159399   13177 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-138000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-138000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (73.76s)
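The status output above points at the underlying breakage: the profile's entry is missing from the kubeconfig, so kubectl resolves against a stale context. A hedged sketch of the remedy minikube itself prints (profile name from this run; this only helps once the cluster is actually reachable again):

    # Rewrite the kubeconfig entry for this profile:
    out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 update-context

    # Verify the context now exists:
    kubectl config get-contexts ingress-addon-legacy-138000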

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (101.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 addons enable ingress-dns --alsologtostderr -v=5
E0226 02:47:59.877510   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:48:32.475749   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-138000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m40.565119224s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0226 02:47:01.239525   13187 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:47:01.240392   13187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:47:01.240400   13187 out.go:304] Setting ErrFile to fd 2...
	I0226 02:47:01.240409   13187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:47:01.241286   13187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:47:01.241909   13187 mustload.go:65] Loading cluster: ingress-addon-legacy-138000
	I0226 02:47:01.242176   13187 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:47:01.242196   13187 addons.go:597] checking whether the cluster is paused
	I0226 02:47:01.242281   13187 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:47:01.242298   13187 host.go:66] Checking if "ingress-addon-legacy-138000" exists ...
	I0226 02:47:01.242691   13187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:47:01.377362   13187 ssh_runner.go:195] Run: systemctl --version
	I0226 02:47:01.377461   13187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:47:01.427765   13187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:47:01.521567   13187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 02:47:01.565007   13187 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0226 02:47:01.585494   13187 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0226 02:47:01.585508   13187 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-138000"
	I0226 02:47:01.585517   13187 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-138000"
	I0226 02:47:01.585542   13187 host.go:66] Checking if "ingress-addon-legacy-138000" exists ...
	I0226 02:47:01.585844   13187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
	I0226 02:47:01.657815   13187 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0226 02:47:01.682797   13187 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0226 02:47:01.705801   13187 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0226 02:47:01.705834   13187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0226 02:47:01.706016   13187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
	I0226 02:47:01.757761   13187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
	I0226 02:47:01.876836   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:01.928613   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:01.928641   13187 retry.go:31] will retry after 231.819097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:02.161546   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:02.222693   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:02.222721   13187 retry.go:31] will retry after 483.278584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:02.706243   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:02.765913   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:02.765943   13187 retry.go:31] will retry after 455.931604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:03.224189   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:03.292332   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:03.292360   13187 retry.go:31] will retry after 584.594145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:03.877985   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:03.935585   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:03.935603   13187 retry.go:31] will retry after 730.169646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:04.666216   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:04.723486   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:04.723503   13187 retry.go:31] will retry after 2.368327452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:07.093485   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:07.156872   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:07.156896   13187 retry.go:31] will retry after 1.90898764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:09.066112   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:09.124957   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:09.124973   13187 retry.go:31] will retry after 3.776029978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:12.901654   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:12.961609   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:12.961633   13187 retry.go:31] will retry after 8.045189616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:21.008638   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:21.067991   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:21.068013   13187 retry.go:31] will retry after 10.555046211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:31.625047   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:31.689008   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:31.689026   13187 retry.go:31] will retry after 16.936980123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:48.628037   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:47:48.691791   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:47:48.691807   13187 retry.go:31] will retry after 23.095927847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:48:11.788089   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:48:11.847614   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:48:11.847632   13187 retry.go:31] will retry after 29.740265232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:48:41.588996   13187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0226 02:48:41.642892   13187 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0226 02:48:41.664821   13187 out.go:177] 
	W0226 02:48:41.686729   13187 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0226 02:48:41.686760   13187 out.go:239] * 
	* 
	W0226 02:48:41.691274   13187 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 02:48:41.717579   13187 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-138000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-138000:

-- stdout --
	[
	    {
	        "Id": "8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d",
	        "Created": "2024-02-26T10:41:33.425139843Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T10:41:33.630722681Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hosts",
	        "LogPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d-json.log",
	        "Name": "/ingress-addon-legacy-138000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-138000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-138000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-138000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-138000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-138000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "037f47f86ea73cc5b8cad6cb1abd2d9e0a8cf63bb1980e2696f2b03c50e89d7d",
	            "SandboxKey": "/var/run/docker/netns/037f47f86ea7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58414"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-138000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e2c2387a875",
	                        "ingress-addon-legacy-138000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "68239fd7eb4aa0ea9446ed6ab741c89cb1e63a86411c24bbcee4e3d0a45a2a0e",
	                    "EndpointID": "7ddb8eb8fd202f0c1bb9d356a1cf71722ed4acaff969cf91c902308c3d7f9bb3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-138000",
	                        "8e2c2387a875"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000: exit status 6 (401.577157ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 02:48:42.179940   13225 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-138000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-138000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (101.02s)
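
The two kubeconfig errors above share one root cause: the ingress-addon-legacy-138000 entry is missing from the kubeconfig, so every client-side check fails even though docker inspect shows the container running. A minimal manual check-and-repair sketch, assuming only the profile name taken from the log above (these are standard kubectl/minikube commands, not output captured in this run):

	$ kubectl config get-contexts                              # the profile's context should be listed; in this failure it is not
	$ minikube -p ingress-addon-legacy-138000 update-context   # rewrites the profile's API endpoint into the kubeconfig
	$ minikube -p ingress-addon-legacy-138000 status           # kubeconfig should now be reported as Configured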

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-138000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-138000:

-- stdout --
	[
	    {
	        "Id": "8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d",
	        "Created": "2024-02-26T10:41:33.425139843Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T10:41:33.630722681Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/hosts",
	        "LogPath": "/var/lib/docker/containers/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d/8e2c2387a875e034ea1056a2f3cbb42b91952547e6cde4002c682392fa857e0d-json.log",
	        "Name": "/ingress-addon-legacy-138000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-138000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-138000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/377f661803c10cfda89f19fff982b3018af308be934fb8743c460e33e18b0a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-138000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-138000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-138000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-138000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "037f47f86ea73cc5b8cad6cb1abd2d9e0a8cf63bb1980e2696f2b03c50e89d7d",
	            "SandboxKey": "/var/run/docker/netns/037f47f86ea7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58414"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-138000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e2c2387a875",
	                        "ingress-addon-legacy-138000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "68239fd7eb4aa0ea9446ed6ab741c89cb1e63a86411c24bbcee4e3d0a45a2a0e",
	                    "EndpointID": "7ddb8eb8fd202f0c1bb9d356a1cf71722ed4acaff969cf91c902308c3d7f9bb3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-138000",
	                        "8e2c2387a875"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-138000 -n ingress-addon-legacy-138000: exit status 6 (399.909439ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 02:48:42.631468   13237 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-138000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-138000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
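
The "failed to get Kubernetes client: <nil>" above follows directly from the missing kubeconfig entry in the previous subtest: the test can only build a client from the API server address recorded there. A hypothetical session for confirming the gap by hand, reusing the container name and the 8443 port mapping shown in the docker inspect output (the grep/docker port calls are an assumed way to check, not commands from this run):

	$ grep -c ingress-addon-legacy-138000 /Users/jenkins/minikube-integration/18222-9538/kubeconfig   # 0 matches: no cluster or context entry
	$ docker port ingress-addon-legacy-138000 8443             # prints 127.0.0.1:58414, the endpoint the kubeconfig entry should point at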

TestKubernetesUpgrade (405.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m17.921442017s)

-- stdout --
	* [kubernetes-upgrade-909000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-909000 in cluster kubernetes-upgrade-909000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0226 03:12:07.550967   19293 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:12:07.551448   19293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:12:07.551457   19293 out.go:304] Setting ErrFile to fd 2...
	I0226 03:12:07.551462   19293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:12:07.551720   19293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:12:07.573167   19293 out.go:298] Setting JSON to false
	I0226 03:12:07.598289   19293 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":11498,"bootTime":1708934429,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:12:07.598371   19293 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:12:07.636812   19293 out.go:177] * [kubernetes-upgrade-909000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:12:07.741838   19293 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:12:07.699985   19293 notify.go:220] Checking for updates...
	I0226 03:12:07.783688   19293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:12:07.846812   19293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:12:07.888616   19293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:12:07.930675   19293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:12:07.972825   19293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:12:07.994076   19293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:12:08.056239   19293 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:12:08.056725   19293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:12:08.178021   19293 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 11:12:08.16735221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:12:08.219349   19293 out.go:177] * Using the docker driver based on user configuration
	I0226 03:12:08.240363   19293 start.go:299] selected driver: docker
	I0226 03:12:08.240378   19293 start.go:903] validating driver "docker" against <nil>
	I0226 03:12:08.240386   19293 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:12:08.244165   19293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:12:08.440999   19293 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 11:12:08.377412416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:12:08.441186   19293 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 03:12:08.441442   19293 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 03:12:08.462365   19293 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 03:12:08.483256   19293 cni.go:84] Creating CNI manager for ""
	I0226 03:12:08.483282   19293 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:12:08.483293   19293 start_flags.go:323] config:
	{Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:12:08.504560   19293 out.go:177] * Starting control plane node kubernetes-upgrade-909000 in cluster kubernetes-upgrade-909000
	I0226 03:12:08.548294   19293 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:12:08.569464   19293 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:12:08.612307   19293 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:12:08.612355   19293 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:12:08.612418   19293 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 03:12:08.612458   19293 cache.go:56] Caching tarball of preloaded images
	I0226 03:12:08.612852   19293 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:12:08.613471   19293 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 03:12:08.614404   19293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/config.json ...
	I0226 03:12:08.614461   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/config.json: {Name:mk0764e3886768e608817eac8a6065fdeeeb75b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:08.664924   19293 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:12:08.664967   19293 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:12:08.664986   19293 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:12:08.665042   19293 start.go:365] acquiring machines lock for kubernetes-upgrade-909000: {Name:mkc9628b013413be98e96bd696a0ce1aaf371370 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:12:08.665197   19293 start.go:369] acquired machines lock for "kubernetes-upgrade-909000" in 142.301µs
	I0226 03:12:08.665226   19293 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 03:12:08.665301   19293 start.go:125] createHost starting for "" (driver="docker")
	I0226 03:12:08.686419   19293 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0226 03:12:08.686620   19293 start.go:159] libmachine.API.Create for "kubernetes-upgrade-909000" (driver="docker")
	I0226 03:12:08.686648   19293 client.go:168] LocalClient.Create starting
	I0226 03:12:08.686783   19293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem
	I0226 03:12:08.686822   19293 main.go:141] libmachine: Decoding PEM data...
	I0226 03:12:08.686840   19293 main.go:141] libmachine: Parsing certificate...
	I0226 03:12:08.686896   19293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem
	I0226 03:12:08.686921   19293 main.go:141] libmachine: Decoding PEM data...
	I0226 03:12:08.686928   19293 main.go:141] libmachine: Parsing certificate...
	I0226 03:12:08.708154   19293 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-909000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 03:12:08.760215   19293 cli_runner.go:211] docker network inspect kubernetes-upgrade-909000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 03:12:08.760356   19293 network_create.go:281] running [docker network inspect kubernetes-upgrade-909000] to gather additional debugging logs...
	I0226 03:12:08.760379   19293 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-909000
	W0226 03:12:08.811548   19293 cli_runner.go:211] docker network inspect kubernetes-upgrade-909000 returned with exit code 1
	I0226 03:12:08.811585   19293 network_create.go:284] error running [docker network inspect kubernetes-upgrade-909000]: docker network inspect kubernetes-upgrade-909000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-909000 not found
	I0226 03:12:08.811605   19293 network_create.go:286] output of [docker network inspect kubernetes-upgrade-909000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-909000 not found
	
	** /stderr **
	I0226 03:12:08.811730   19293 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 03:12:08.864845   19293 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:12:08.865220   19293 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f8a70}
	I0226 03:12:08.865235   19293 network_create.go:124] attempt to create docker network kubernetes-upgrade-909000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0226 03:12:08.865306   19293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000
	W0226 03:12:08.916204   19293 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000 returned with exit code 1
	W0226 03:12:08.916257   19293 network_create.go:149] failed to create docker network kubernetes-upgrade-909000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 03:12:08.916275   19293 network_create.go:116] failed to create docker network kubernetes-upgrade-909000 192.168.58.0/24, will retry: subnet is taken
	I0226 03:12:08.917882   19293 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:12:08.918242   19293 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022f98e0}
	I0226 03:12:08.918256   19293 network_create.go:124] attempt to create docker network kubernetes-upgrade-909000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0226 03:12:08.918324   19293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000
	W0226 03:12:08.969299   19293 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000 returned with exit code 1
	W0226 03:12:08.969340   19293 network_create.go:149] failed to create docker network kubernetes-upgrade-909000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0226 03:12:08.969356   19293 network_create.go:116] failed to create docker network kubernetes-upgrade-909000 192.168.67.0/24, will retry: subnet is taken
	I0226 03:12:08.970699   19293 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:12:08.971055   19293 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002416750}
	I0226 03:12:08.971070   19293 network_create.go:124] attempt to create docker network kubernetes-upgrade-909000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0226 03:12:08.971135   19293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 kubernetes-upgrade-909000
	I0226 03:12:09.060472   19293 network_create.go:108] docker network kubernetes-upgrade-909000 192.168.76.0/24 created
	I0226 03:12:09.060515   19293 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-909000" container
	I0226 03:12:09.060638   19293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 03:12:09.109395   19293 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-909000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 --label created_by.minikube.sigs.k8s.io=true
	I0226 03:12:09.161780   19293 oci.go:103] Successfully created a docker volume kubernetes-upgrade-909000
	I0226 03:12:09.161916   19293 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-909000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 --entrypoint /usr/bin/test -v kubernetes-upgrade-909000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 03:12:09.679140   19293 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-909000
	I0226 03:12:09.679185   19293 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:12:09.679211   19293 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 03:12:09.679410   19293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-909000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 03:12:12.861852   19293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-909000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (3.182358728s)
	I0226 03:12:12.861882   19293 kic.go:203] duration metric: took 3.182661 seconds to extract preloaded images to volume
	I0226 03:12:12.861998   19293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 03:12:12.980407   19293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-909000 --name kubernetes-upgrade-909000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-909000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-909000 --network kubernetes-upgrade-909000 --ip 192.168.76.2 --volume kubernetes-upgrade-909000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 03:12:13.343553   19293 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Running}}
	I0226 03:12:13.400861   19293 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:12:13.461807   19293 cli_runner.go:164] Run: docker exec kubernetes-upgrade-909000 stat /var/lib/dpkg/alternatives/iptables
	I0226 03:12:13.591850   19293 oci.go:144] the created container "kubernetes-upgrade-909000" has a running status.
	I0226 03:12:13.591907   19293 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa...
	I0226 03:12:13.763783   19293 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 03:12:13.832843   19293 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:12:13.894603   19293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 03:12:13.894626   19293 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-909000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 03:12:14.000228   19293 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:12:14.052973   19293 machine.go:88] provisioning docker machine ...
	I0226 03:12:14.053028   19293 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-909000"
	I0226 03:12:14.053122   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:14.106040   19293 main.go:141] libmachine: Using SSH client type: native
	I0226 03:12:14.106267   19293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd67b920] 0xd67e680 <nil>  [] 0s} 127.0.0.1 59876 <nil> <nil>}
	I0226 03:12:14.106281   19293 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-909000 && echo "kubernetes-upgrade-909000" | sudo tee /etc/hostname
	I0226 03:12:14.262361   19293 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-909000
	
	I0226 03:12:14.262462   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:14.314084   19293 main.go:141] libmachine: Using SSH client type: native
	I0226 03:12:14.314266   19293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd67b920] 0xd67e680 <nil>  [] 0s} 127.0.0.1 59876 <nil> <nil>}
	I0226 03:12:14.314280   19293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-909000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-909000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-909000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:12:14.448009   19293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:12:14.448030   19293 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:12:14.448057   19293 ubuntu.go:177] setting up certificates
	I0226 03:12:14.448064   19293 provision.go:83] configureAuth start
	I0226 03:12:14.448129   19293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-909000
	I0226 03:12:14.499447   19293 provision.go:138] copyHostCerts
	I0226 03:12:14.499535   19293 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:12:14.499551   19293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:12:14.499666   19293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:12:14.499891   19293 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:12:14.499898   19293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:12:14.499969   19293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:12:14.500143   19293 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:12:14.500150   19293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:12:14.500225   19293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:12:14.500384   19293 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-909000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-909000]
	I0226 03:12:14.564097   19293 provision.go:172] copyRemoteCerts
	I0226 03:12:14.564158   19293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:12:14.564213   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:14.614871   19293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59876 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:12:14.717058   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:12:14.757968   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0226 03:12:14.797910   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 03:12:14.838093   19293 provision.go:86] duration metric: configureAuth took 390.009846ms
	I0226 03:12:14.838113   19293 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:12:14.838251   19293 config.go:182] Loaded profile config "kubernetes-upgrade-909000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 03:12:14.838329   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:14.889747   19293 main.go:141] libmachine: Using SSH client type: native
	I0226 03:12:14.889924   19293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd67b920] 0xd67e680 <nil>  [] 0s} 127.0.0.1 59876 <nil> <nil>}
	I0226 03:12:14.889937   19293 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:12:15.023922   19293 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:12:15.023936   19293 ubuntu.go:71] root file system type: overlay
	I0226 03:12:15.024041   19293 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:12:15.024131   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:15.074595   19293 main.go:141] libmachine: Using SSH client type: native
	I0226 03:12:15.074772   19293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd67b920] 0xd67e680 <nil>  [] 0s} 127.0.0.1 59876 <nil> <nil>}
	I0226 03:12:15.074821   19293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:12:15.231944   19293 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:12:15.232050   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:15.283673   19293 main.go:141] libmachine: Using SSH client type: native
	I0226 03:12:15.283857   19293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd67b920] 0xd67e680 <nil>  [] 0s} 127.0.0.1 59876 <nil> <nil>}
	I0226 03:12:15.283869   19293 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:12:15.924710   19293 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:12:15.226311134 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 03:12:15.924738   19293 machine.go:91] provisioned docker machine in 1.871730039s
	I0226 03:12:15.924762   19293 client.go:171] LocalClient.Create took 7.238069202s
	I0226 03:12:15.924787   19293 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-909000" took 7.238133149s
	I0226 03:12:15.924796   19293 start.go:300] post-start starting for "kubernetes-upgrade-909000" (driver="docker")
	I0226 03:12:15.924808   19293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:12:15.924867   19293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:12:15.924923   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:15.979384   19293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59876 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:12:16.081272   19293 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:12:16.085562   19293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:12:16.085588   19293 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:12:16.085595   19293 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:12:16.085600   19293 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:12:16.085611   19293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:12:16.085716   19293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:12:16.085910   19293 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:12:16.086119   19293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:12:16.101118   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:12:16.141777   19293 start.go:303] post-start completed in 216.966766ms
	I0226 03:12:16.142419   19293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-909000
	I0226 03:12:16.195041   19293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/config.json ...
	I0226 03:12:16.195509   19293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:12:16.195579   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:16.248297   19293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59876 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:12:16.341283   19293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 03:12:16.346566   19293 start.go:128] duration metric: createHost completed in 7.681211922s
	I0226 03:12:16.346588   19293 start.go:83] releasing machines lock for "kubernetes-upgrade-909000", held for 7.68134691s
	I0226 03:12:16.346672   19293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-909000
	I0226 03:12:16.397883   19293 ssh_runner.go:195] Run: cat /version.json
	I0226 03:12:16.397924   19293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:12:16.397951   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:16.397995   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:16.455360   19293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59876 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:12:16.455996   19293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59876 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:12:16.644517   19293 ssh_runner.go:195] Run: systemctl --version
	I0226 03:12:16.649723   19293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 03:12:16.654772   19293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 03:12:16.696145   19293 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0226 03:12:16.696216   19293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 03:12:16.724012   19293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 03:12:16.751828   19293 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 03:12:16.751849   19293 start.go:475] detecting cgroup driver to use...
	I0226 03:12:16.751862   19293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:12:16.751965   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:12:16.779087   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 03:12:16.795315   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:12:16.811101   19293 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:12:16.811162   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:12:16.827195   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:12:16.843082   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:12:16.858463   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:12:16.874695   19293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:12:16.890148   19293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:12:16.906015   19293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:12:16.920667   19293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:12:16.936233   19293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:12:16.993843   19293 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 03:12:17.083489   19293 start.go:475] detecting cgroup driver to use...
	I0226 03:12:17.083509   19293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:12:17.083606   19293 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:12:17.101437   19293 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:12:17.101524   19293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:12:17.120479   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:12:17.155375   19293 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:12:17.160177   19293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:12:17.176324   19293 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:12:17.208828   19293 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:12:17.286684   19293 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:12:17.385758   19293 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:12:17.385836   19293 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:12:17.414435   19293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:12:17.477270   19293 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:12:17.760259   19293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:12:17.787164   19293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:12:17.834865   19293 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 03:12:17.834991   19293 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-909000 dig +short host.docker.internal
	I0226 03:12:17.966391   19293 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:12:17.966478   19293 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:12:17.971110   19293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:12:17.989338   19293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:12:18.043250   19293 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:12:18.043359   19293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:12:18.064762   19293 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:12:18.064776   19293 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:12:18.064828   19293 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:12:18.080031   19293 ssh_runner.go:195] Run: which lz4
	I0226 03:12:18.084568   19293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 03:12:18.088969   19293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 03:12:18.089004   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 03:12:24.348018   19293 docker.go:649] Took 6.263496 seconds to copy over tarball
	I0226 03:12:24.348107   19293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 03:12:25.885365   19293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.537231478s)
	I0226 03:12:25.885397   19293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 03:12:25.935373   19293 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:12:25.950801   19293 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 03:12:25.980111   19293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:12:26.044768   19293 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:12:26.500164   19293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:12:26.519009   19293 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:12:26.519023   19293 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:12:26.519030   19293 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 03:12:26.523720   19293 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:12:26.523953   19293 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:12:26.524144   19293 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:12:26.524334   19293 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 03:12:26.524842   19293 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 03:12:26.524899   19293 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:12:26.524899   19293 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:12:26.525019   19293 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:12:26.528621   19293 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:12:26.528993   19293 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:12:26.529066   19293 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 03:12:26.530243   19293 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:12:26.530650   19293 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:12:26.530660   19293 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:12:26.530659   19293 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 03:12:26.530703   19293 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:12:28.490090   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0226 03:12:28.509345   19293 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 03:12:28.509399   19293 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 03:12:28.509464   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 03:12:28.527087   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0226 03:12:28.578845   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:12:28.598407   19293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 03:12:28.598432   19293 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:12:28.598507   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:12:28.616542   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0226 03:12:28.622933   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 03:12:28.640756   19293 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 03:12:28.640781   19293 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:12:28.640846   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0226 03:12:28.658593   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0226 03:12:29.154492   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:12:29.162407   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:12:29.174344   19293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 03:12:29.174372   19293 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:12:29.174444   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:12:29.182741   19293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 03:12:29.182770   19293 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:12:29.182833   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:12:29.191226   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:12:29.193966   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0226 03:12:29.202109   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0226 03:12:29.206802   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 03:12:29.209757   19293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 03:12:29.209781   19293 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:12:29.209847   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:12:29.225897   19293 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 03:12:29.225924   19293 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 03:12:29.225986   19293 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 03:12:29.227286   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0226 03:12:29.243407   19293 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0226 03:12:29.353128   19293 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:12:29.372482   19293 cache_images.go:92] LoadImages completed in 2.853425941s
	W0226 03:12:29.372536   19293 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0226 03:12:29.372614   19293 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:12:29.423636   19293 cni.go:84] Creating CNI manager for ""
	I0226 03:12:29.423656   19293 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:12:29.423671   19293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 03:12:29.423684   19293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-909000 NodeName:kubernetes-upgrade-909000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 03:12:29.423777   19293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-909000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-909000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 03:12:29.423824   19293 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-909000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 03:12:29.423886   19293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 03:12:29.438605   19293 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:12:29.438668   19293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:12:29.452789   19293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0226 03:12:29.482597   19293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 03:12:29.511144   19293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0226 03:12:29.539321   19293 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:12:29.543541   19293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:12:29.560832   19293 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000 for IP: 192.168.76.2
	I0226 03:12:29.560863   19293 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:29.561036   19293 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:12:29.561102   19293 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:12:29.561156   19293 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key
	I0226 03:12:29.561170   19293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt with IP's: []
	I0226 03:12:29.651158   19293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt ...
	I0226 03:12:29.651174   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt: {Name:mkd138989d074a79f8f50dd5bf8c4214c0664704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:29.651477   19293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key ...
	I0226 03:12:29.651486   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key: {Name:mkae3dc79afd8df48008c2926dcd705a0fe90586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:29.651696   19293 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key.31bdca25
	I0226 03:12:29.651710   19293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 03:12:29.874112   19293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt.31bdca25 ...
	I0226 03:12:29.874137   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt.31bdca25: {Name:mkffac6128b3e292040d170491d8b295e63f5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:29.874428   19293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key.31bdca25 ...
	I0226 03:12:29.874437   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key.31bdca25: {Name:mkbb80e34c4bdf87e1f51e7a81d2de9b2da558a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:29.874631   19293 certs.go:337] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt
	I0226 03:12:29.874829   19293 certs.go:341] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key
	I0226 03:12:29.875027   19293 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key
	I0226 03:12:29.875042   19293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.crt with IP's: []
	I0226 03:12:30.017304   19293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.crt ...
	I0226 03:12:30.017327   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.crt: {Name:mk7770f40698f3adfa655080e14e91dadc704b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:30.017586   19293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key ...
	I0226 03:12:30.017598   19293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key: {Name:mkcbda67687f52df6dc286873064c35e132015c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:12:30.017980   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:12:30.018031   19293 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:12:30.018043   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:12:30.018085   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:12:30.018124   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:12:30.018164   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:12:30.018245   19293 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:12:30.018738   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:12:30.059063   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 03:12:30.100141   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:12:30.141148   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 03:12:30.181596   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:12:30.222274   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:12:30.261046   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:12:30.300896   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:12:30.340898   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:12:30.384184   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:12:30.425109   19293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:12:30.464796   19293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:12:30.493330   19293 ssh_runner.go:195] Run: openssl version
	I0226 03:12:30.499859   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:12:30.515749   19293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:12:30.519959   19293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:12:30.520007   19293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:12:30.527251   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:12:30.543297   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:12:30.559275   19293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:12:30.563829   19293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:12:30.563880   19293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:12:30.570674   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:12:30.586156   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:12:30.601976   19293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:12:30.606157   19293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:12:30.606214   19293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:12:30.613123   19293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
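
The three ls/ln/openssl sequences above install each CA into the node's system trust store. OpenSSL locates CAs under /etc/ssl/certs by subject-hash symlinks, which is why each PEM gets a companion link named <hash>.0 (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). A minimal sketch of that pattern as a standalone script (the example.pem path is hypothetical; only the mechanism is taken from the log):

    #!/bin/bash
    # Install a CA certificate so OpenSSL can find it by subject hash.
    pem=/usr/share/ca-certificates/example.pem      # hypothetical path
    hash=$(openssl x509 -hash -noout -in "$pem")    # e.g. prints b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # OpenSSL lookup name
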
	I0226 03:12:30.628782   19293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:12:30.632938   19293 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 03:12:30.632990   19293 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:12:30.633111   19293 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:12:30.652721   19293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:12:30.668010   19293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:12:30.683007   19293 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:12:30.683070   19293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:12:30.698748   19293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
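
The status-2 exit from ls is expected on a fresh node: none of the four kubeconfig files exist yet, so minikube skips stale-config cleanup and goes straight to kubeadm init. The same check can be reproduced by hand; a minimal sketch (file list copied from the command above):

    #!/bin/bash
    # Exit 0 only if a previous control plane left its kubeconfigs behind.
    ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
           /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      && echo "stale configs present" \
      || echo "fresh node, nothing to clean"
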
	I0226 03:12:30.698779   19293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:12:30.758530   19293 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:12:30.758585   19293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:12:31.018973   19293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:12:31.019065   19293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:12:31.019158   19293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:12:31.212845   19293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:12:31.214180   19293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:12:31.220379   19293 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:12:31.282200   19293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:12:31.307001   19293 out.go:204]   - Generating certificates and keys ...
	I0226 03:12:31.307075   19293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:12:31.307147   19293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:12:31.431631   19293 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 03:12:31.752691   19293 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 03:12:31.843158   19293 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 03:12:31.943642   19293 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 03:12:31.999371   19293 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 03:12:31.999489   19293 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 03:12:32.178975   19293 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 03:12:32.179087   19293 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0226 03:12:32.305692   19293 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 03:12:32.545581   19293 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 03:12:32.693771   19293 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 03:12:32.693838   19293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:12:32.877218   19293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:12:33.077805   19293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:12:33.180231   19293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:12:33.467929   19293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:12:33.468413   19293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:12:33.489832   19293 out.go:204]   - Booting up control plane ...
	I0226 03:12:33.489902   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:12:33.489962   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:12:33.490017   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:12:33.490078   19293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:12:33.490197   19293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:13:13.477851   19293 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:13:13.478456   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:13:13.478599   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:13:18.480226   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:13:18.480384   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:13:28.482501   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:13:28.482646   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:13:48.484258   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:13:48.484418   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:14:28.486375   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:14:28.486574   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:14:28.486584   19293 kubeadm.go:322] 
	I0226 03:14:28.486615   19293 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:14:28.486651   19293 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:14:28.486662   19293 kubeadm.go:322] 
	I0226 03:14:28.486684   19293 kubeadm.go:322] This error is likely caused by:
	I0226 03:14:28.486708   19293 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:14:28.486809   19293 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:14:28.486820   19293 kubeadm.go:322] 
	I0226 03:14:28.486897   19293 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:14:28.486929   19293 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:14:28.486958   19293 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:14:28.486975   19293 kubeadm.go:322] 
	I0226 03:14:28.487080   19293 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:14:28.487154   19293 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:14:28.487227   19293 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:14:28.487269   19293 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:14:28.487323   19293 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:14:28.487353   19293 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:14:28.491558   19293 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:14:28.491625   19293 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:14:28.491728   19293 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:14:28.491815   19293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:14:28.491885   19293 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:14:28.491958   19293 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
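
This first init attempt dies in wait-control-plane: the kubelet never answers its local health endpoint on port 10248, so no static pods ever start. The warnings kubeadm prints (cgroupfs driver, swap enabled, Docker 25.0.3 far beyond the last validated 18.09) are plausible culprits for a v1.16-era kubelet on a modern kic base image, though the log alone does not prove which. The checks kubeadm suggests can be run as one script; a minimal triage sketch (commands taken verbatim from the messages above):

    #!/bin/bash
    # Triage a kubelet that fails the 10248 healthz probe.
    curl -sSL http://localhost:10248/healthz || echo "kubelet healthz refused"
    systemctl status kubelet --no-pager       # is the service even running?
    journalctl -xeu kubelet | tail -n 50      # why did it stop?
    docker ps -a | grep kube | grep -v pause  # any crashed control-plane containers?
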
	W0226 03:14:28.492022   19293 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-909000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 03:14:28.492053   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
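
The kubeadm reset above is minikube's recovery step: wipe the failed control-plane state, then re-issue the identical kubeadm init (which follows at 03:14:28.943502). A rough sketch of that retry pattern in shell (an illustration of the flow visible in this log, not minikube's actual Go implementation; KUBEADM_ARGS is a hypothetical stand-in for the long --config/--ignore-preflight-errors flag list above):

    #!/bin/bash
    # Retry pattern visible in this log: init once; on failure, reset and init again.
    BIN=/var/lib/minikube/binaries/v1.16.0
    if ! sudo env PATH="$BIN:$PATH" kubeadm init $KUBEADM_ARGS; then
        sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force
        sudo env PATH="$BIN:$PATH" kubeadm init $KUBEADM_ARGS
    fi
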
	I0226 03:14:28.911390   19293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:14:28.928599   19293 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:14:28.928654   19293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:14:28.943480   19293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:14:28.943502   19293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:14:28.995188   19293 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:14:28.995233   19293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:14:29.224927   19293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:14:29.225049   19293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:14:29.225142   19293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:14:29.378832   19293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:14:29.379796   19293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:14:29.386073   19293 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:14:29.448545   19293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:14:29.469991   19293 out.go:204]   - Generating certificates and keys ...
	I0226 03:14:29.470082   19293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:14:29.470146   19293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:14:29.470202   19293 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:14:29.470263   19293 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:14:29.470327   19293 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:14:29.470381   19293 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:14:29.470435   19293 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:14:29.470493   19293 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:14:29.470563   19293 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:14:29.470629   19293 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:14:29.470663   19293 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:14:29.470704   19293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:14:29.506625   19293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:14:29.583535   19293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:14:29.637012   19293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:14:29.803464   19293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:14:29.803949   19293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:14:29.826329   19293 out.go:204]   - Booting up control plane ...
	I0226 03:14:29.826484   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:14:29.826618   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:14:29.826722   19293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:14:29.826861   19293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:14:29.827100   19293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:15:09.812454   19293 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:15:09.813383   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:15:09.813562   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:15:14.814555   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:15:14.814727   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:15:24.816595   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:15:24.816754   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:15:44.817477   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:15:44.817627   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:16:24.818854   19293 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:16:24.819015   19293 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:16:24.819027   19293 kubeadm.go:322] 
	I0226 03:16:24.819061   19293 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:16:24.819112   19293 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:16:24.819125   19293 kubeadm.go:322] 
	I0226 03:16:24.819156   19293 kubeadm.go:322] This error is likely caused by:
	I0226 03:16:24.819180   19293 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:16:24.819268   19293 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:16:24.819275   19293 kubeadm.go:322] 
	I0226 03:16:24.819355   19293 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:16:24.819401   19293 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:16:24.819468   19293 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:16:24.819483   19293 kubeadm.go:322] 
	I0226 03:16:24.819619   19293 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:16:24.819740   19293 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:16:24.819816   19293 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:16:24.819856   19293 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:16:24.819913   19293 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:16:24.819939   19293 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:16:24.824142   19293 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:16:24.824224   19293 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:16:24.824338   19293 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:16:24.824472   19293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:16:24.824612   19293 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:16:24.824722   19293 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 03:16:24.824775   19293 kubeadm.go:406] StartCluster complete in 3m54.190694542s
	I0226 03:16:24.824880   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:16:24.843980   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.843995   19293 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:16:24.844061   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:16:24.861402   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.861417   19293 logs.go:278] No container was found matching "etcd"
	I0226 03:16:24.861495   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:16:24.879506   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.879520   19293 logs.go:278] No container was found matching "coredns"
	I0226 03:16:24.879591   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:16:24.897438   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.897470   19293 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:16:24.897548   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:16:24.914327   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.914343   19293 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:16:24.914414   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:16:24.932341   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.932357   19293 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:16:24.932424   19293 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:16:24.950531   19293 logs.go:276] 0 containers: []
	W0226 03:16:24.950547   19293 logs.go:278] No container was found matching "kindnet"
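
The seven lookups above rely on the dockershim naming convention, under which containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so filtering on name=k8s_kube-apiserver matches apiserver containers in any state. Every lookup returns an empty list here, confirming the control plane never started at all. The same sweep as one script (container list and filter syntax copied from the log):

    #!/bin/bash
    # Enumerate expected control-plane containers the way the lookups above do.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
        echo "${c}: ${ids:-none}"
    done
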
	I0226 03:16:24.950557   19293 logs.go:123] Gathering logs for kubelet ...
	I0226 03:16:24.950568   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:16:24.991790   19293 logs.go:123] Gathering logs for dmesg ...
	I0226 03:16:24.991808   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:16:25.012216   19293 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:16:25.012232   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:16:25.106943   19293 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
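
The describe-nodes failure is consistent with everything above: with no kube-apiserver container running, nothing listens on localhost:8443. A quick reachability probe before retrying any kubectl command, as a sketch (binary and kubeconfig paths taken from the command above):

    #!/bin/bash
    # Probe the apiserver with the same binary and kubeconfig minikube used above.
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz' \
      || echo "apiserver not reachable on localhost:8443"
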
	I0226 03:16:25.106955   19293 logs.go:123] Gathering logs for Docker ...
	I0226 03:16:25.106964   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:16:25.128929   19293 logs.go:123] Gathering logs for container status ...
	I0226 03:16:25.128945   19293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0226 03:16:25.190016   19293 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 03:16:25.190036   19293 out.go:239] * 
	* 
	W0226 03:16:25.190074   19293 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:16:25.190086   19293 out.go:239] * 
	* 
	W0226 03:16:25.190702   19293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 03:16:25.254643   19293 out.go:177] 
	W0226 03:16:25.297377   19293 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:16:25.297440   19293 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 03:16:25.297467   19293 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 03:16:25.339382   19293 out.go:177] 

                                                
                                                
** /stderr **
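The stderr above flags one plausible root cause: Docker reports the cgroupfs cgroup driver while kubeadm recommends systemd, and the v1.16.0 kubelet never answers its health check. The log's own suggestion line proposes retrying with an explicit kubelet override; as a sketch built only from flags that appear in this run, not a verified fix:

    minikube start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 \
      --driver=docker --extra-config=kubelet.cgroup-driver=systemd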
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-909000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-909000: (1.527365187s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-909000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-909000 status --format={{.Host}}: exit status 7 (115.676868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (1m47.55577697s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-909000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (384.541262ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-909000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-909000
	    minikube start -p kubernetes-upgrade-909000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9090002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-909000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
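This failure is the expected outcome of the test ("should fail"): in-place downgrades are refused by design, since a control plane's etcd contents and API objects cannot generally be rolled back to an older Kubernetes release. Restating option 1 from the suggestion block as a runnable sequence:

    minikube delete -p kubernetes-upgrade-909000
    minikube start -p kubernetes-upgrade-909000 --kubernetes-version=v1.16.0 --driver=docker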
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
E0226 03:18:32.520689   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-909000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (31.170093906s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-26 03:18:46.263932 -0800 PST m=+3049.034741268
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-909000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-909000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a",
	        "Created": "2024-02-26T11:12:13.036714784Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 252879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:16:28.298153812Z",
	            "FinishedAt": "2024-02-26T11:16:25.874113127Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a/hosts",
	        "LogPath": "/var/lib/docker/containers/0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a/0f9f6d6215095cbb99798a547cf3d51a5cb2a052ecd603abe5b7aee1f018052a-json.log",
	        "Name": "/kubernetes-upgrade-909000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-909000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-909000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a32895242c39f3fac2ff2e89222975c28182519bb92b6d8827e5253bbd88975-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a32895242c39f3fac2ff2e89222975c28182519bb92b6d8827e5253bbd88975/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a32895242c39f3fac2ff2e89222975c28182519bb92b6d8827e5253bbd88975/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a32895242c39f3fac2ff2e89222975c28182519bb92b6d8827e5253bbd88975/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-909000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-909000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-909000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-909000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-909000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8ea5c3afc42ba5359d8323bae7bd1fff567d80c571bb3a4780e2350f772eb4e",
	            "SandboxKey": "/var/run/docker/netns/a8ea5c3afc42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60179"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-909000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0f9f6d621509",
	                        "kubernetes-upgrade-909000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "379570235dbafcaaa6384f05b3632ed5c76182eebf5b1d91f177162bfd838911",
	                    "EndpointID": "07f1b8f848b8c2ef3f612224efdf789ad69e7cd93287143d1fbebac8338054d6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-909000",
	                        "0f9f6d621509"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
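Single fields from the inspect output can be extracted with a Go template rather than scanning the full JSON; a sketch for the host port mapped to the container's SSH port (60175 in the dump above):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' kubernetes-upgrade-909000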
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-909000 -n kubernetes-upgrade-909000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-909000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-909000 logs -n 25: (2.814112838s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | stopped-upgrade-590000 stop       | minikube                  | jenkins | v1.26.0 | 26 Feb 24 03:14 PST | 26 Feb 24 03:14 PST |
	| start   | -p stopped-upgrade-590000         | stopped-upgrade-590000    | jenkins | v1.32.0 | 26 Feb 24 03:14 PST | 26 Feb 24 03:15 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-590000         | stopped-upgrade-590000    | jenkins | v1.32.0 | 26 Feb 24 03:15 PST | 26 Feb 24 03:15 PST |
	| start   | -p pause-543000 --memory=2048     | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:15 PST | 26 Feb 24 03:15 PST |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	| start   | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:15 PST | 26 Feb 24 03:16 PST |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-909000      | kubernetes-upgrade-909000 | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	| start   | -p kubernetes-upgrade-909000      | kubernetes-upgrade-909000 | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:18 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| pause   | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| unpause | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| pause   | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| delete  | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| delete  | -p pause-543000                   | pause-543000              | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:16 PST |
	| start   | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:16 PST |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:16 PST | 26 Feb 24 03:17 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST | 26 Feb 24 03:17 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST | 26 Feb 24 03:17 PST |
	| start   | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST | 26 Feb 24 03:17 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-970000 sudo       | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST | 26 Feb 24 03:17 PST |
	| start   | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:17 PST | 26 Feb 24 03:18 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-970000 sudo       | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:18 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-970000            | NoKubernetes-970000       | jenkins | v1.32.0 | 26 Feb 24 03:18 PST | 26 Feb 24 03:18 PST |
	| start   | -p auto-722000 --memory=3072      | auto-722000               | jenkins | v1.32.0 | 26 Feb 24 03:18 PST |                     |
	|         | --alsologtostderr --wait=true     |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-909000      | kubernetes-upgrade-909000 | jenkins | v1.32.0 | 26 Feb 24 03:18 PST |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-909000      | kubernetes-upgrade-909000 | jenkins | v1.32.0 | 26 Feb 24 03:18 PST | 26 Feb 24 03:18 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 03:18:15
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 03:18:15.155147   21184 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:18:15.155408   21184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:18:15.155413   21184 out.go:304] Setting ErrFile to fd 2...
	I0226 03:18:15.155417   21184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:18:15.155610   21184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:18:15.157155   21184 out.go:298] Setting JSON to false
	I0226 03:18:15.180715   21184 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":11866,"bootTime":1708934429,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:18:15.180845   21184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:18:15.204223   21184 out.go:177] * [kubernetes-upgrade-909000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:18:15.245108   21184 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:18:15.245123   21184 notify.go:220] Checking for updates...
	I0226 03:18:15.287109   21184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:18:15.308094   21184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:18:15.329361   21184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:18:15.350181   21184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:18:15.392011   21184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:18:15.413496   21184 config.go:182] Loaded profile config "kubernetes-upgrade-909000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:18:15.413935   21184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:18:15.473827   21184 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:18:15.473984   21184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:18:15.591146   21184 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-26 11:18:15.578942742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
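The field that matters for the earlier kubeadm warning is CgroupDriver:cgroupfs. It can be queried on its own instead of parsing the whole record; a sketch:

    docker info --format '{{ .CgroupDriver }}'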
	I0226 03:18:13.755157   21056 cni.go:84] Creating CNI manager for ""
	I0226 03:18:13.785938   21056 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:18:13.785998   21056 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 03:18:13.786043   21056 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-722000 NodeName:auto-722000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 03:18:13.786283   21056 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-722000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
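To confirm what minikube actually wrote to the node, the rendered file can be read back over ssh; a sketch, assuming the auto-722000 profile from this log stream and the /var/tmp/minikube path used by the kubeadm invocations above:

    minikube ssh -p auto-722000 -- sudo cat /var/tmp/minikube/kubeadm.yaml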
	
	I0226 03:18:13.786429   21056 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-722000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-722000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
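Note the pair of ExecStart lines in the unit above: in a systemd drop-in, a bare "ExecStart=" first clears the inherited command so the ExecStart that follows fully replaces it rather than appending. The merged unit as systemd sees it can be reviewed from the host; a sketch:

    minikube ssh -p auto-722000 -- systemctl cat kubelet --no-pager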
	I0226 03:18:13.786570   21056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 03:18:13.806227   21056 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:18:13.806302   21056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:18:13.821559   21056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0226 03:18:13.853813   21056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 03:18:13.895177   21056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0226 03:18:13.931761   21056 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:18:13.936298   21056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:18:13.956269   21056 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000 for IP: 192.168.67.2
	I0226 03:18:13.956292   21056 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:13.956552   21056 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:18:13.956692   21056 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:18:13.956758   21056 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.key
	I0226 03:18:13.956775   21056 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt with IP's: []
	I0226 03:18:14.144429   21056 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt ...
	I0226 03:18:14.144446   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: {Name:mk0e9cd96f784e026d63e5e5d10417e2fde8ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.144830   21056 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.key ...
	I0226 03:18:14.144839   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.key: {Name:mk3d17875a9805cc9f7679dac696768361d6a5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.145086   21056 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key.c7fa3a9e
	I0226 03:18:14.145102   21056 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 03:18:14.234891   21056 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt.c7fa3a9e ...
	I0226 03:18:14.234905   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt.c7fa3a9e: {Name:mk1930ff57b46b61136aaf209b8f43c95f58bc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.235212   21056 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key.c7fa3a9e ...
	I0226 03:18:14.235224   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key.c7fa3a9e: {Name:mk1eded9448b7431b24f6763981e1e10c8a33440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.235426   21056 certs.go:337] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt
	I0226 03:18:14.235603   21056 certs.go:341] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key
	I0226 03:18:14.235781   21056 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.key
	I0226 03:18:14.235794   21056 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.crt with IP's: []
	I0226 03:18:14.298253   21056 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.crt ...
	I0226 03:18:14.298270   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.crt: {Name:mkd6523fea7a515e8afac0da9ac2063fcedfee67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.298777   21056 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.key ...
	I0226 03:18:14.298788   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.key: {Name:mk92a587fdf64ab537b376da88cb828fe931f465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:14.299270   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:18:14.299325   21056 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:18:14.299335   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:18:14.299383   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:18:14.299438   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:18:14.299475   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:18:14.299552   21056 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:18:14.300233   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:18:14.343897   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 03:18:14.388452   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:18:14.430856   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 03:18:14.471145   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:18:14.513413   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:18:14.556939   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:18:14.597947   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:18:14.638221   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:18:14.691938   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:18:14.734911   21056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:18:14.777759   21056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
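The "scp memory" wording above means the kubeconfig is streamed from an in-memory buffer over the SSH session rather than copied from a file on the host. Done by hand, the pattern would look roughly like this (hypothetical: $SSH_PORT and the buffer contents are placeholders, not values from this run):

	printf '%s' "$KUBECONFIG_CONTENTS" | \
	  ssh -p "$SSH_PORT" docker@127.0.0.1 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'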
	I0226 03:18:14.807167   21056 ssh_runner.go:195] Run: openssl version
	I0226 03:18:14.813007   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:18:14.828829   21056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:18:14.833095   21056 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:18:14.833155   21056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:18:14.840319   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:18:14.856088   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:18:14.872008   21056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:14.876208   21056 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:14.876254   21056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:14.882973   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:18:14.898820   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:18:14.914446   21056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:18:14.918727   21056 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:18:14.918778   21056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:18:14.925309   21056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
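Each of the three cert blocks above follows the same recipe: compute the OpenSSL subject-name hash of a PEM file, then symlink it into /etc/ssl/certs under <hash>.0, which is the lookup key OpenSSL uses when scanning a hashed trust directory. Condensed into a generic sketch (the path is illustrative):

	cert=/usr/share/ca-certificates/example.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # OpenSSL resolves trust anchors by <subject-hash>.0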
	I0226 03:18:14.941048   21056 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:18:14.945248   21056 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 03:18:14.945295   21056 kubeadm.go:404] StartCluster: {Name:auto-722000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-722000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:18:14.945396   21056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:18:14.966108   21056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:18:14.980877   21056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:18:14.995931   21056 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:18:14.995987   21056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:18:15.010883   21056 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:18:15.010911   21056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:18:15.097014   21056 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 03:18:15.097068   21056 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:18:15.279185   21056 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:18:15.279283   21056 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:18:15.279392   21056 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
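The pre-pull kubeadm hints at here can be run directly, pinned to the Kubernetes version this cluster uses. A minimal invocation would be the following (minikube itself drives kubeadm with a full --config file instead, as the init command above shows):

	sudo kubeadm config images pull --kubernetes-version v1.28.4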
	I0226 03:18:15.609297   21056 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:18:15.633407   21184 out.go:177] * Using the docker driver based on existing profile
	I0226 03:18:15.696493   21184 start.go:299] selected driver: docker
	I0226 03:18:15.696503   21184 start.go:903] validating driver "docker" against &{Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:18:15.696560   21184 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:18:15.699850   21184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:18:15.806624   21184 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:120 SystemTime:2024-02-26 11:18:15.796211862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:18:15.806901   21184 cni.go:84] Creating CNI manager for ""
	I0226 03:18:15.806919   21184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:18:15.806930   21184 start_flags.go:323] config:
	{Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:18:15.848987   21184 out.go:177] * Starting control plane node kubernetes-upgrade-909000 in cluster kubernetes-upgrade-909000
	I0226 03:18:15.870104   21184 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:18:15.891063   21184 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:18:15.911986   21184 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 03:18:15.912013   21184 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:18:15.912029   21184 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 03:18:15.912043   21184 cache.go:56] Caching tarball of preloaded images
	I0226 03:18:15.912168   21184 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:18:15.912178   21184 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0226 03:18:15.912718   21184 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/config.json ...
	I0226 03:18:15.963924   21184 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:18:15.963949   21184 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:18:15.963966   21184 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:18:15.964003   21184 start.go:365] acquiring machines lock for kubernetes-upgrade-909000: {Name:mkc9628b013413be98e96bd696a0ce1aaf371370 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:18:15.964085   21184 start.go:369] acquired machines lock for "kubernetes-upgrade-909000" in 64.928µs
	I0226 03:18:15.964108   21184 start.go:96] Skipping create...Using existing machine configuration
	I0226 03:18:15.964118   21184 fix.go:54] fixHost starting: 
	I0226 03:18:15.964363   21184 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:18:16.016572   21184 fix.go:102] recreateIfNeeded on kubernetes-upgrade-909000: state=Running err=<nil>
	W0226 03:18:16.016604   21184 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 03:18:16.037604   21184 out.go:177] * Updating the running docker "kubernetes-upgrade-909000" container ...
	I0226 03:18:15.675440   21056 out.go:204]   - Generating certificates and keys ...
	I0226 03:18:15.675503   21056 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:18:15.675584   21056 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:18:15.831985   21056 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 03:18:15.874253   21056 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 03:18:16.012504   21056 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 03:18:16.469603   21056 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 03:18:16.531290   21056 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 03:18:16.531553   21056 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-722000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 03:18:16.780611   21056 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 03:18:16.780727   21056 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-722000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 03:18:16.915241   21056 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 03:18:17.042309   21056 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 03:18:17.172628   21056 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 03:18:17.172830   21056 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:18:17.402754   21056 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:18:17.454438   21056 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:18:17.677088   21056 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:18:17.728922   21056 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:18:17.729311   21056 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:18:17.731622   21056 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:18:17.773792   21056 out.go:204]   - Booting up control plane ...
	I0226 03:18:17.773910   21056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:18:17.774006   21056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:18:17.774101   21056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:18:17.774233   21056 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:18:17.774325   21056 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:18:17.774388   21056 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 03:18:17.824198   21056 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:18:16.079134   21184 machine.go:88] provisioning docker machine ...
	I0226 03:18:16.079171   21184 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-909000"
	I0226 03:18:16.079254   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:16.130706   21184 main.go:141] libmachine: Using SSH client type: native
	I0226 03:18:16.130957   21184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc51c920] 0xc51f680 <nil>  [] 0s} 127.0.0.1 60175 <nil> <nil>}
	I0226 03:18:16.130968   21184 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-909000 && echo "kubernetes-upgrade-909000" | sudo tee /etc/hostname
	I0226 03:18:16.285880   21184 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-909000
	
	I0226 03:18:16.285977   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:16.394631   21184 main.go:141] libmachine: Using SSH client type: native
	I0226 03:18:16.394810   21184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc51c920] 0xc51f680 <nil>  [] 0s} 127.0.0.1 60175 <nil> <nil>}
	I0226 03:18:16.394822   21184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-909000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-909000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-909000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:18:16.530908   21184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
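The script above keeps the hostname mapping idempotent: it rewrites an existing 127.0.1.1 entry in place and appends one only if none exists, following the Debian/Ubuntu convention of mapping the machine's own name to 127.0.1.1 rather than 127.0.0.1. A quick hand check of the result (hypothetical, run inside the guest):

	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 kubernetes-upgrade-909000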
	I0226 03:18:16.530933   21184 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:18:16.530955   21184 ubuntu.go:177] setting up certificates
	I0226 03:18:16.530971   21184 provision.go:83] configureAuth start
	I0226 03:18:16.531064   21184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-909000
	I0226 03:18:16.581904   21184 provision.go:138] copyHostCerts
	I0226 03:18:16.581992   21184 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:18:16.582001   21184 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:18:16.582134   21184 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:18:16.582446   21184 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:18:16.582455   21184 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:18:16.582531   21184 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:18:16.582762   21184 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:18:16.582769   21184 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:18:16.582838   21184 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:18:16.583012   21184 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-909000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-909000]
	I0226 03:18:16.656066   21184 provision.go:172] copyRemoteCerts
	I0226 03:18:16.656145   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:18:16.656199   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:16.707917   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:16.809050   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:18:16.851028   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0226 03:18:16.892843   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 03:18:16.934414   21184 provision.go:86] duration metric: configureAuth took 403.423463ms
	I0226 03:18:16.934433   21184 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:18:16.934580   21184 config.go:182] Loaded profile config "kubernetes-upgrade-909000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:18:16.934653   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:16.989496   21184 main.go:141] libmachine: Using SSH client type: native
	I0226 03:18:16.989708   21184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc51c920] 0xc51f680 <nil>  [] 0s} 127.0.0.1 60175 <nil> <nil>}
	I0226 03:18:16.989717   21184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:18:17.124505   21184 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:18:17.124527   21184 ubuntu.go:71] root file system type: overlay
	I0226 03:18:17.124629   21184 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:18:17.124723   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:17.177128   21184 main.go:141] libmachine: Using SSH client type: native
	I0226 03:18:17.177305   21184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc51c920] 0xc51f680 <nil>  [] 0s} 127.0.0.1 60175 <nil> <nil>}
	I0226 03:18:17.177351   21184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:18:17.336150   21184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:18:17.336244   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:17.388894   21184 main.go:141] libmachine: Using SSH client type: native
	I0226 03:18:17.389093   21184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc51c920] 0xc51f680 <nil>  [] 0s} 127.0.0.1 60175 <nil> <nil>}
	I0226 03:18:17.389112   21184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:18:17.535151   21184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
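The unit file written and installed above uses the standard systemd idiom for replacing a service's command: an empty ExecStart= first clears any inherited value, and the next ExecStart= sets the new one; without the clearing line, systemd rejects a non-oneshot unit that ends up with two ExecStart= settings. The same idea as a plain drop-in, with an illustrative command (this is not what minikube does here; it replaces the whole unit file instead):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	# First line clears the inherited command; the second sets the override.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker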
	I0226 03:18:17.535165   21184 machine.go:91] provisioned docker machine in 1.45600738s
	I0226 03:18:17.535176   21184 start.go:300] post-start starting for "kubernetes-upgrade-909000" (driver="docker")
	I0226 03:18:17.535184   21184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:18:17.535243   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:18:17.535308   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:17.587477   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:17.690416   21184 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:18:17.694739   21184 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:18:17.694765   21184 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:18:17.694771   21184 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:18:17.694777   21184 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:18:17.694785   21184 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:18:17.694878   21184 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:18:17.695017   21184 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:18:17.695182   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:18:17.710209   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:18:17.751833   21184 start.go:303] post-start completed in 216.646687ms
	I0226 03:18:17.751902   21184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:18:17.751964   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:17.806267   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:17.896979   21184 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 03:18:17.902550   21184 fix.go:56] fixHost completed within 1.938412461s
	I0226 03:18:17.902570   21184 start.go:83] releasing machines lock for "kubernetes-upgrade-909000", held for 1.938460302s
	I0226 03:18:17.902664   21184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-909000
	I0226 03:18:17.955429   21184 ssh_runner.go:195] Run: cat /version.json
	I0226 03:18:17.955462   21184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:18:17.955509   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:17.955547   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:18.013030   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:18.013035   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:18.206509   21184 ssh_runner.go:195] Run: systemctl --version
	I0226 03:18:18.211666   21184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0226 03:18:18.216671   21184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0226 03:18:18.216729   21184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 03:18:18.232273   21184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 03:18:18.247681   21184 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0226 03:18:18.247701   21184 start.go:475] detecting cgroup driver to use...
	I0226 03:18:18.247713   21184 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:18:18.247823   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:18:18.277862   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 03:18:18.297143   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:18:18.318633   21184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:18:18.318721   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:18:18.337171   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:18:18.354021   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:18:18.370904   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:18:18.386908   21184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:18:18.402920   21184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:18:18.419327   21184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:18:18.435940   21184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:18:18.453308   21184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:18:18.521376   21184 ssh_runner.go:195] Run: sudo systemctl restart containerd
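Taken together, the sed edits above converge /etc/containerd/config.toml on a cgroupfs-driver CRI setup before containerd is restarted. Reconstructed under the stock containerd v2 CRI layout (an excerpt of the intended end state, assembled from the values in the commands above, not a file quoted from this run), the touched keys would read roughly:

	version = 2
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"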
	I0226 03:18:22.828900   21056 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.004217 seconds
	I0226 03:18:22.829073   21056 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 03:18:22.838778   21056 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 03:18:23.355985   21056 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 03:18:23.356149   21056 kubeadm.go:322] [mark-control-plane] Marking the node auto-722000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 03:18:23.864900   21056 kubeadm.go:322] [bootstrap-token] Using token: ofr90s.iiey76krrurq3r8w
	I0226 03:18:23.886203   21056 out.go:204]   - Configuring RBAC rules ...
	I0226 03:18:23.886294   21056 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 03:18:23.925519   21056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 03:18:23.931086   21056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 03:18:23.933511   21056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 03:18:23.936130   21056 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 03:18:23.938776   21056 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 03:18:23.948572   21056 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 03:18:24.076403   21056 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 03:18:24.401917   21056 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 03:18:24.402484   21056 kubeadm.go:322] 
	I0226 03:18:24.402613   21056 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 03:18:24.402625   21056 kubeadm.go:322] 
	I0226 03:18:24.402722   21056 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 03:18:24.402731   21056 kubeadm.go:322] 
	I0226 03:18:24.402766   21056 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 03:18:24.402858   21056 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 03:18:24.402910   21056 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 03:18:24.402916   21056 kubeadm.go:322] 
	I0226 03:18:24.402952   21056 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 03:18:24.402956   21056 kubeadm.go:322] 
	I0226 03:18:24.402997   21056 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 03:18:24.403002   21056 kubeadm.go:322] 
	I0226 03:18:24.403049   21056 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 03:18:24.403129   21056 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 03:18:24.403209   21056 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 03:18:24.403216   21056 kubeadm.go:322] 
	I0226 03:18:24.403294   21056 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 03:18:24.403379   21056 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 03:18:24.403387   21056 kubeadm.go:322] 
	I0226 03:18:24.403503   21056 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ofr90s.iiey76krrurq3r8w \
	I0226 03:18:24.403657   21056 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d69ca26d920a227de4ff10d4f1e52ac20fe76071c84e24bcb329e717c0642855 \
	I0226 03:18:24.403705   21056 kubeadm.go:322] 	--control-plane 
	I0226 03:18:24.403732   21056 kubeadm.go:322] 
	I0226 03:18:24.403794   21056 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 03:18:24.403798   21056 kubeadm.go:322] 
	I0226 03:18:24.403928   21056 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ofr90s.iiey76krrurq3r8w \
	I0226 03:18:24.404042   21056 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d69ca26d920a227de4ff10d4f1e52ac20fe76071c84e24bcb329e717c0642855 
	I0226 03:18:24.408920   21056 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0226 03:18:24.409049   21056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:18:24.409058   21056 cni.go:84] Creating CNI manager for ""
	I0226 03:18:24.409068   21056 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:18:24.433239   21056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 03:18:24.454316   21056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 03:18:24.471302   21056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
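The 457-byte conflist written above is minikube's bridge CNI configuration. The exact file is not reproduced in the log, but a standard bridge-plus-portmap chain using the 10.244.0.0/16 pod subnet seen in the earlier sed rewrites has roughly this shape (a sketch, not the verbatim file):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}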
	I0226 03:18:24.499953   21056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 03:18:24.500036   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=auto-722000 minikube.k8s.io/updated_at=2024_02_26T03_18_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:24.500045   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:24.617802   21056 ops.go:34] apiserver oom_adj: -16
	I0226 03:18:24.617888   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:25.118029   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:25.618029   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:26.117999   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:26.618953   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:27.118027   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:27.617980   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:28.119032   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:28.618081   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
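The repeated `kubectl get sa default` calls here (and continuing below, interleaved with the other test's output) are a readiness poll: kube-controller-manager creates the default ServiceAccount asynchronously after the API server comes up, so minikube retries about twice a second until it appears. The equivalent shell idiom would be (a sketch, not minikube's Go code):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the controller manager has created the default SA
	done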
	I0226 03:18:28.682860   21184 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.161382769s)
	I0226 03:18:28.682879   21184 start.go:475] detecting cgroup driver to use...
	I0226 03:18:28.682891   21184 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:18:28.682963   21184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:18:28.705179   21184 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:18:28.705268   21184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:18:28.724158   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:18:28.756664   21184 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:18:28.760926   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:18:28.775271   21184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:18:28.808308   21184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:18:28.882253   21184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:18:28.981122   21184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:18:28.981209   21184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:18:29.014114   21184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:18:29.093300   21184 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:18:29.380168   21184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 03:18:29.398008   21184 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0226 03:18:29.420301   21184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:18:29.437101   21184 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 03:18:29.502509   21184 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 03:18:29.567190   21184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:18:29.631009   21184 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 03:18:29.657536   21184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:18:29.675748   21184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:18:29.741461   21184 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 03:18:29.836440   21184 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 03:18:29.836528   21184 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 03:18:29.840929   21184 start.go:543] Will wait 60s for crictl version
	I0226 03:18:29.840990   21184 ssh_runner.go:195] Run: which crictl
	I0226 03:18:29.845062   21184 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 03:18:29.896237   21184 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
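The "Will wait 60s for socket path" step above amounts to polling until /var/run/cri-dockerd.sock appears. A minimal sketch of that wait, assuming a simple stat-based poll (the function name and 500ms interval are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}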
	I0226 03:18:29.896321   21184 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:18:29.918333   21184 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:18:29.966637   21184 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0226 03:18:29.966799   21184 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-909000 dig +short host.docker.internal
	I0226 03:18:30.070613   21184 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:18:30.070707   21184 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:18:30.075378   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:30.127160   21184 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 03:18:30.127233   21184 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:18:30.149254   21184 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:18:30.149277   21184 docker.go:615] Images already preloaded, skipping extraction
	I0226 03:18:30.149355   21184 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:18:29.118013   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:29.618152   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:30.118014   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:30.618068   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:31.118093   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:31.618223   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:32.118644   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:32.618089   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:33.118737   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:33.618137   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:30.169662   21184 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:18:30.176974   21184 cache_images.go:84] Images are preloaded, skipping loading
	I0226 03:18:30.177077   21184 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:18:30.229430   21184 cni.go:84] Creating CNI manager for ""
	I0226 03:18:30.229462   21184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:18:30.229488   21184 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
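The CNI recommendation above follows a simple rule. A sketch paraphrasing only the logged decision ("docker" driver plus "docker" runtime on Kubernetes v1.24+ recommends bridge), not minikube's actual cni.go logic:

package main

import "fmt"

// chooseCNI paraphrases the rule stated in the log line above; this is a
// reading of the log message, not a copy of minikube's real code.
func chooseCNI(driver, runtime string, minor int) string {
	if driver == "docker" && runtime == "docker" && minor >= 24 {
		return "bridge"
	}
	return "" // fall back to whatever the runtime provides
}

func main() {
	fmt.Println(chooseCNI("docker", "docker", 29)) // prints "bridge"
}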
	I0226 03:18:30.229516   21184 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-909000 NodeName:kubernetes-upgrade-909000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 03:18:30.229695   21184 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-909000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 03:18:30.229756   21184 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-909000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
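The kubeadm config rendered above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that lists the documents in such a stream, assuming gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Read a multi-document YAML stream (for example the kubeadm config above)
// from stdin and print the apiVersion/kind of each document.
func main() {
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s/%s\n", doc.APIVersion, doc.Kind)
	}
}

Piping /var/tmp/minikube/kubeadm.yaml through this should print the four apiVersion/kind pairs shown above.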
	I0226 03:18:30.229825   21184 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0226 03:18:30.245119   21184 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:18:30.245185   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:18:30.260041   21184 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0226 03:18:30.288480   21184 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0226 03:18:30.317395   21184 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0226 03:18:30.347074   21184 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:18:30.351820   21184 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000 for IP: 192.168.76.2
	I0226 03:18:30.351843   21184 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:30.352033   21184 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:18:30.352106   21184 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:18:30.352231   21184 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key
	I0226 03:18:30.352304   21184 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key.31bdca25
	I0226 03:18:30.352377   21184 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key
	I0226 03:18:30.352630   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:18:30.352681   21184 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:18:30.352690   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:18:30.352736   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:18:30.352778   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:18:30.352823   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:18:30.352914   21184 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:18:30.353466   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:18:30.396450   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 03:18:30.438122   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:18:30.478029   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 03:18:30.518958   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:18:30.559481   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:18:30.600135   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:18:30.641007   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:18:30.685798   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:18:30.726485   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:18:30.766906   21184 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:18:30.806814   21184 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:18:30.835231   21184 ssh_runner.go:195] Run: openssl version
	I0226 03:18:30.840719   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:18:30.856243   21184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:18:30.861224   21184 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:18:30.861275   21184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:18:30.867808   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:18:30.883016   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:18:30.898907   21184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:30.903163   21184 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:30.903214   21184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:18:30.909829   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:18:30.924714   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:18:30.940863   21184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:18:30.945079   21184 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:18:30.945127   21184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:18:30.951508   21184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
	I0226 03:18:30.967069   21184 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:18:30.971123   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 03:18:30.977422   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 03:18:30.984337   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 03:18:30.991364   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 03:18:30.997991   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 03:18:31.004480   21184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
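Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours. An equivalent check in Go using crypto/x509 (the file argument is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -noout -checkend 86400`: report whether the
// PEM certificate given as the first argument expires within 24 hours.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}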
	I0226 03:18:31.010897   21184 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-909000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:18:31.011000   21184 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:18:31.028710   21184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:18:31.043949   21184 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 03:18:31.043966   21184 kubeadm.go:636] restartCluster start
	I0226 03:18:31.044027   21184 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 03:18:31.058765   21184 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:31.058860   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:31.110278   21184 kubeconfig.go:92] found "kubernetes-upgrade-909000" server: "https://127.0.0.1:60179"
	I0226 03:18:31.110796   21184 kapi.go:59] client config for kubernetes-upgrade-909000: &rest.Config{Host:"https://127.0.0.1:60179", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key", CAFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd91a5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
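The rest.Config dump above is what client-go derives from the profile's kubeconfig and client certificate pair. A minimal sketch building an equivalent client, assuming client-go is available and KUBECONFIG points at the profile (both are assumptions for illustration):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config like the one dumped above from an on-disk
	// kubeconfig; the path source is an illustrative assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d kube-system pods\n", len(pods.Items))
}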
	I0226 03:18:31.111378   21184 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 03:18:31.127355   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:31.127426   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:31.144038   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:31.627385   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:31.627469   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:31.644247   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:32.127518   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:32.127652   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:32.144612   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:32.627445   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:32.627536   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:32.644381   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:33.127510   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:33.127599   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:33.144446   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:33.627690   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:33.627750   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:33.644371   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:34.127431   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:34.127538   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:18:34.152379   21184 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:18:34.628376   21184 api_server.go:166] Checking apiserver status ...
	I0226 03:18:34.628449   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:18:34.655444   21184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14294/cgroup
	W0226 03:18:34.700900   21184 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14294/cgroup: Process exited with status 1
	stdout:
	
	stderr:
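The failed freezer lookup above is expected on a cgroup v2 host: /proc/<pid>/cgroup then contains a single "0::" line with no per-controller "freezer" entry, so the egrep exits 1 and the check falls through to the plain `ls` that follows. A sketch of the same probe against the current process:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Look for a v1 "freezer" controller line in /proc/self/cgroup, as the
// failed egrep above does for the apiserver pid. On a cgroup v2 host the
// file holds a single "0::" line, so the search comes up empty.
func main() {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.Contains(sc.Text(), ":freezer:") {
			fmt.Println(sc.Text())
			return
		}
	}
	fmt.Println("no freezer cgroup found (likely cgroup v2)")
}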
	I0226 03:18:34.700968   21184 ssh_runner.go:195] Run: ls
	I0226 03:18:34.705651   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:34.118187   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:34.618153   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:35.118901   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:35.618037   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:36.118077   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:36.619705   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:37.119349   21056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 03:18:37.207509   21056 kubeadm.go:1088] duration metric: took 12.707449194s to wait for elevateKubeSystemPrivileges.
	I0226 03:18:37.207543   21056 kubeadm.go:406] StartCluster complete in 22.262080358s
	I0226 03:18:37.207565   21056 settings.go:142] acquiring lock: {Name:mka913612bc349b92ac5926f4ed5df6954261df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:37.207680   21056 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:18:37.208478   21056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:37.208777   21056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 03:18:37.208802   21056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 03:18:37.208854   21056 addons.go:69] Setting storage-provisioner=true in profile "auto-722000"
	I0226 03:18:37.208875   21056 addons.go:234] Setting addon storage-provisioner=true in "auto-722000"
	I0226 03:18:37.208879   21056 addons.go:69] Setting default-storageclass=true in profile "auto-722000"
	I0226 03:18:37.208908   21056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-722000"
	I0226 03:18:37.208925   21056 config.go:182] Loaded profile config "auto-722000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 03:18:37.208933   21056 host.go:66] Checking if "auto-722000" exists ...
	I0226 03:18:37.209307   21056 cli_runner.go:164] Run: docker container inspect auto-722000 --format={{.State.Status}}
	I0226 03:18:37.209446   21056 cli_runner.go:164] Run: docker container inspect auto-722000 --format={{.State.Status}}
	I0226 03:18:37.287935   21056 addons.go:234] Setting addon default-storageclass=true in "auto-722000"
	I0226 03:18:37.287973   21056 host.go:66] Checking if "auto-722000" exists ...
	I0226 03:18:37.288336   21056 cli_runner.go:164] Run: docker container inspect auto-722000 --format={{.State.Status}}
	I0226 03:18:37.320214   21056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:18:37.357339   21056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:18:37.357355   21056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 03:18:37.357441   21056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-722000
	I0226 03:18:37.368810   21056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 03:18:37.368841   21056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 03:18:37.368960   21056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-722000
	I0226 03:18:37.372729   21056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 03:18:37.440400   21056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60454 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/auto-722000/id_rsa Username:docker}
	I0226 03:18:37.446540   21056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60454 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/auto-722000/id_rsa Username:docker}
	I0226 03:18:37.798180   21056 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-722000" context rescaled to 1 replicas
	I0226 03:18:37.798215   21056 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 03:18:37.823410   21056 out.go:177] * Verifying Kubernetes components...
	I0226 03:18:37.830692   21056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 03:18:37.866361   21056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:18:37.831108   21056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:18:39.005930   21056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.633143154s)
	I0226 03:18:39.005987   21056 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0226 03:18:39.178787   21056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312463625s)
	I0226 03:18:39.178822   21056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312424114s)
	I0226 03:18:39.178852   21056 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.312465047s)
	I0226 03:18:39.178945   21056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-722000
	I0226 03:18:39.206455   21056 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 03:18:37.277271   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:18:37.277322   21184 retry.go:31] will retry after 248.701491ms: https://127.0.0.1:60179/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:18:37.526644   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:37.531569   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:37.531603   21184 retry.go:31] will retry after 359.00817ms: https://127.0.0.1:60179/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:37.890841   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:37.901635   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:37.901682   21184 retry.go:31] will retry after 359.094532ms: https://127.0.0.1:60179/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:38.261833   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:38.267171   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:38.267194   21184 retry.go:31] will retry after 452.163644ms: https://127.0.0.1:60179/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:38.719685   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:38.729899   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 200:
	ok
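The healthz progression above is typical of a restarting control plane: first 403 while the apiserver is up but still rejecting the unauthenticated probe as "system:anonymous", then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. A sketch of the polling loop, using the forwarded port from the log; minikube's own checker additionally presents client certificates:

package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Poll the apiserver healthz endpoint until it returns 200, mirroring the
// 403 -> 500 -> 200 progression in the log above. The URL and interval are
// illustrative assumptions.
func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// The apiserver serves a self-signed CA during bootstrap.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	for attempt := 0; attempt < 120; attempt++ {
		resp, err := client.Get("https://127.0.0.1:60179/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, bytes.TrimSpace(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "healthz never returned 200")
	os.Exit(1)
}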
	I0226 03:18:38.743128   21184 system_pods.go:86] 4 kube-system pods found
	I0226 03:18:38.743150   21184 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-909000" [8e6cb743-1ca2-498d-92c5-afee5f410fed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:18:38.743161   21184 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-909000" [49f56e65-cd3a-4bf2-8602-fd1d1633f460] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:18:38.743168   21184 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-909000" [daa03c28-fc98-4f71-ae73-cf24837fe8fb] Pending
	I0226 03:18:38.743174   21184 system_pods.go:89] "storage-provisioner" [b11cde95-b0d0-4e81-b114-49b6941634f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0226 03:18:38.743182   21184 kubeadm.go:620] needs reconfigure: missing components: kube-dns, etcd, kube-proxy, kube-scheduler
	I0226 03:18:38.743189   21184 kubeadm.go:1135] stopping kube-system containers ...
	I0226 03:18:38.743257   21184 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:18:38.763095   21184 docker.go:483] Stopping containers: [279019ebc9ff 21a879cca228 bcff6c946c40 246ddc3725e4 6c6d6b6c3759 a43349b94209 f9606f266f2a 89201fdb2e1a 2d5fbbaafcd5 8e3330ae1173 fe6d5dbaca4e 9855c052aff9 093d23e5792a 33221fde759d 76fd43d238d9]
	I0226 03:18:38.763180   21184 ssh_runner.go:195] Run: docker stop 279019ebc9ff 21a879cca228 bcff6c946c40 246ddc3725e4 6c6d6b6c3759 a43349b94209 f9606f266f2a 89201fdb2e1a 2d5fbbaafcd5 8e3330ae1173 fe6d5dbaca4e 9855c052aff9 093d23e5792a 33221fde759d 76fd43d238d9
	I0226 03:18:39.741730   21184 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 03:18:39.771135   21184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:18:39.788256   21184 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Feb 26 11:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Feb 26 11:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Feb 26 11:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Feb 26 11:14 /etc/kubernetes/scheduler.conf
	
	I0226 03:18:39.788316   21184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 03:18:39.851389   21184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 03:18:39.867854   21184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 03:18:39.883306   21184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 03:18:39.898903   21184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:18:39.915876   21184 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 03:18:39.915890   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:39.965596   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:39.242519   21056 addons.go:505] enable addons completed in 2.03370846s: enabled=[storage-provisioner default-storageclass]
	I0226 03:18:39.250738   21056 node_ready.go:35] waiting up to 15m0s for node "auto-722000" to be "Ready" ...
	I0226 03:18:39.253910   21056 node_ready.go:49] node "auto-722000" has status "Ready":"True"
	I0226 03:18:39.253924   21056 node_ready.go:38] duration metric: took 3.168777ms waiting for node "auto-722000" to be "Ready" ...
	I0226 03:18:39.253932   21056 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 03:18:39.259954   21056 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-2sskz" in "kube-system" namespace to be "Ready" ...
	I0226 03:18:41.267817   21056 pod_ready.go:102] pod "coredns-5dd5756b68-2sskz" in "kube-system" namespace has status "Ready":"False"
	I0226 03:18:40.516031   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:40.650634   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:40.708227   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:40.841459   21184 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:18:40.841588   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:18:41.341855   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:18:41.843629   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:18:41.862146   21184 api_server.go:72] duration metric: took 1.020681972s to wait for apiserver process to appear ...
	I0226 03:18:41.862160   21184 api_server.go:88] waiting for apiserver healthz status ...
	I0226 03:18:41.862177   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:43.646755   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0226 03:18:43.646783   21184 api_server.go:103] status: https://127.0.0.1:60179/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:18:43.646796   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:43.657467   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0226 03:18:43.657487   21184 api_server.go:103] status: https://127.0.0.1:60179/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:18:43.862863   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:43.869443   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:18:43.869459   21184 api_server.go:103] status: https://127.0.0.1:60179/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:44.362314   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:44.367081   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:18:44.367096   21184 api_server.go:103] status: https://127.0.0.1:60179/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:18:44.862900   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:44.868058   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 200:
	ok
	I0226 03:18:44.874017   21184 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 03:18:44.874030   21184 api_server.go:131] duration metric: took 3.01184677s to wait for apiserver health ...
	I0226 03:18:44.874036   21184 cni.go:84] Creating CNI manager for ""
	I0226 03:18:44.874044   21184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:18:44.898835   21184 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 03:18:44.922817   21184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 03:18:44.940249   21184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0226 03:18:44.968412   21184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 03:18:44.974685   21184 system_pods.go:59] 4 kube-system pods found
	I0226 03:18:44.974701   21184 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-909000" [8e6cb743-1ca2-498d-92c5-afee5f410fed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:18:44.974711   21184 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-909000" [49f56e65-cd3a-4bf2-8602-fd1d1633f460] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:18:44.974718   21184 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-909000" [daa03c28-fc98-4f71-ae73-cf24837fe8fb] Pending
	I0226 03:18:44.974722   21184 system_pods.go:61] "storage-provisioner" [b11cde95-b0d0-4e81-b114-49b6941634f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0226 03:18:44.974727   21184 system_pods.go:74] duration metric: took 6.303777ms to wait for pod list to return data ...
	I0226 03:18:44.974732   21184 node_conditions.go:102] verifying NodePressure condition ...
	I0226 03:18:44.977610   21184 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0226 03:18:44.977623   21184 node_conditions.go:123] node cpu capacity is 12
	I0226 03:18:44.977635   21184 node_conditions.go:105] duration metric: took 2.895203ms to run NodePressure ...
	I0226 03:18:44.977645   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:18:45.232195   21184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 03:18:45.239792   21184 ops.go:34] apiserver oom_adj: -16
	I0226 03:18:45.239815   21184 kubeadm.go:640] restartCluster took 14.195745313s
	I0226 03:18:45.239823   21184 kubeadm.go:406] StartCluster complete in 14.228838864s
	I0226 03:18:45.239839   21184 settings.go:142] acquiring lock: {Name:mka913612bc349b92ac5926f4ed5df6954261df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:45.239917   21184 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:18:45.240613   21184 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:18:45.240912   21184 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 03:18:45.240931   21184 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 03:18:45.240964   21184 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-909000"
	I0226 03:18:45.240977   21184 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-909000"
	I0226 03:18:45.240990   21184 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-909000"
	I0226 03:18:45.240995   21184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-909000"
	W0226 03:18:45.240999   21184 addons.go:243] addon storage-provisioner should already be in state true
	I0226 03:18:45.241042   21184 host.go:66] Checking if "kubernetes-upgrade-909000" exists ...
	I0226 03:18:45.241107   21184 config.go:182] Loaded profile config "kubernetes-upgrade-909000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:18:45.241248   21184 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:18:45.241333   21184 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:18:45.241867   21184 kapi.go:59] client config for kubernetes-upgrade-909000: &rest.Config{Host:"https://127.0.0.1:60179", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key", CAFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd91a5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 03:18:45.249615   21184 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-909000" context rescaled to 1 replicas
	I0226 03:18:45.249655   21184 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 03:18:45.271188   21184 out.go:177] * Verifying Kubernetes components...
	I0226 03:18:45.315077   21184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:18:45.343102   21184 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:18:45.322878   21184 kapi.go:59] client config for kubernetes-upgrade-909000: &rest.Config{Host:"https://127.0.0.1:60179", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubernetes-upgrade-909000/client.key", CAFile:"/Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xd91a5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 03:18:45.327021   21184 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 03:18:45.334554   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:45.343400   21184 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-909000"
	W0226 03:18:45.363952   21184 addons.go:243] addon default-storageclass should already be in state true
	I0226 03:18:45.363990   21184 host.go:66] Checking if "kubernetes-upgrade-909000" exists ...
	I0226 03:18:45.364075   21184 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:18:45.364093   21184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 03:18:45.364187   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:45.366973   21184 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-909000 --format={{.State.Status}}
	I0226 03:18:45.420795   21184 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:18:45.420910   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:45.420926   21184 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 03:18:45.420938   21184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 03:18:45.420949   21184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:18:45.421025   21184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-909000
	I0226 03:18:45.439914   21184 api_server.go:72] duration metric: took 190.230938ms to wait for apiserver process to appear ...
	I0226 03:18:45.439939   21184 api_server.go:88] waiting for apiserver healthz status ...
	I0226 03:18:45.439962   21184 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60179/healthz ...
	I0226 03:18:45.444410   21184 api_server.go:279] https://127.0.0.1:60179/healthz returned 200:
	ok
	I0226 03:18:45.445817   21184 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 03:18:45.445835   21184 api_server.go:131] duration metric: took 5.888675ms to wait for apiserver health ...
	I0226 03:18:45.445844   21184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 03:18:45.449533   21184 system_pods.go:59] 4 kube-system pods found
	I0226 03:18:45.449548   21184 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-909000" [8e6cb743-1ca2-498d-92c5-afee5f410fed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:18:45.449556   21184 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-909000" [49f56e65-cd3a-4bf2-8602-fd1d1633f460] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:18:45.449567   21184 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-909000" [daa03c28-fc98-4f71-ae73-cf24837fe8fb] Pending
	I0226 03:18:45.449575   21184 system_pods.go:61] "storage-provisioner" [b11cde95-b0d0-4e81-b114-49b6941634f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0226 03:18:45.449584   21184 system_pods.go:74] duration metric: took 3.733967ms to wait for pod list to return data ...
	I0226 03:18:45.449592   21184 kubeadm.go:581] duration metric: took 199.913704ms to wait for : map[apiserver:true system_pods:true] ...
	I0226 03:18:45.449601   21184 node_conditions.go:102] verifying NodePressure condition ...
	I0226 03:18:45.452286   21184 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0226 03:18:45.452298   21184 node_conditions.go:123] node cpu capacity is 12
	I0226 03:18:45.452305   21184 node_conditions.go:105] duration metric: took 2.69977ms to run NodePressure ...
	I0226 03:18:45.452314   21184 start.go:228] waiting for startup goroutines ...
	I0226 03:18:45.473935   21184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60175 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/kubernetes-upgrade-909000/id_rsa Username:docker}
	I0226 03:18:45.536392   21184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:18:45.588669   21184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 03:18:46.083129   21184 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 03:18:46.125107   21184 addons.go:505] enable addons completed in 884.177386ms: enabled=[storage-provisioner default-storageclass]
	I0226 03:18:46.125137   21184 start.go:233] waiting for cluster config update ...
	I0226 03:18:46.125154   21184 start.go:242] writing updated cluster config ...
	I0226 03:18:46.125568   21184 ssh_runner.go:195] Run: rm -f paused
	I0226 03:18:46.168805   21184 start.go:601] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0226 03:18:46.190548   21184 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-909000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 26 11:18:29 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:29Z" level=info msg="Docker Info: &{ID:03a4ad3b-ba9f-4200-87f1-2d18bca3e2be Containers:9 ContainersRunning:0 ContainersPaused:0 ContainersStopped:9 Images:15 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2024-02-26T11:18:29.825134561Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.6.12-linuxkit OperatingSystem:Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00014ed90 NCPU:12 MemTotal:6213300224 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:kubernetes-upgrade-909000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 26 11:18:29 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 26 11:18:29 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 26 11:18:29 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 26 11:18:29 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:29Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 26 11:18:29 kubernetes-upgrade-909000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 26 11:18:34 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a43349b94209003f1754b95d9f6e03ac5721868932335e81de38bc6976ead03e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:34 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/246ddc3725e49c8ea274c2bd01e351abf9860d059328b00e8ef2b1ec6353c4ae/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:34 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6c6d6b6c3759cc5b86d47c644bb28cdb2e2181401bf8a6fd79a6611f4589b656/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:34 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f9606f266f2a225203684ecf8563b833bbc1edba2ac5aa7871e389ed7b7f981e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.846858474Z" level=info msg="ignoring event" container=21a879cca228eaac104b63bbff2792d66355328be75008cb4b84908f400eae19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.847288393Z" level=info msg="ignoring event" container=246ddc3725e49c8ea274c2bd01e351abf9860d059328b00e8ef2b1ec6353c4ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.847370975Z" level=info msg="ignoring event" container=a43349b94209003f1754b95d9f6e03ac5721868932335e81de38bc6976ead03e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.847572928Z" level=info msg="ignoring event" container=f9606f266f2a225203684ecf8563b833bbc1edba2ac5aa7871e389ed7b7f981e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.849357201Z" level=info msg="ignoring event" container=6c6d6b6c3759cc5b86d47c644bb28cdb2e2181401bf8a6fd79a6611f4589b656 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:38 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:38.853117054Z" level=info msg="ignoring event" container=bcff6c946c4090f276ee2516f9f8765d1f8489edfaa1048b66f19cd1bf229944 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:39 kubernetes-upgrade-909000 dockerd[13483]: time="2024-02-26T11:18:39.670965113Z" level=info msg="ignoring event" container=279019ebc9ff0d6c05ea7016ac6b639073ccebaedccb8cf9fe2c37584ea59a19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83adbbef1f5773420bc95dc6c1ff420f88f0902103d499a76cf917e76c0c3087/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: W0226 11:18:39.775814   13719 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c9320feacdf5b8b3e076d2933d26023a9f8ae22190a3f6d0b7420a0db7d55c0e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: W0226 11:18:39.780633   13719 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b2689b4e1366581aadd17a4b422270921fd67dc74d6faa8e011ab22b8b91fba/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: W0226 11:18:39.783967   13719 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: time="2024-02-26T11:18:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/102f4f1bb4ab52b45b0c4c6b21aafa39ae096aa63ea6ad8d06050d938a1d511f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 26 11:18:39 kubernetes-upgrade-909000 cri-dockerd[13719]: W0226 11:18:39.910757   13719 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01770285e81de       4270645ed6b7a       6 seconds ago       Running             kube-scheduler            2                   83adbbef1f577       kube-scheduler-kubernetes-upgrade-909000
	92c9b8ad75ce8       d4e01cdf63970       6 seconds ago       Running             kube-controller-manager   3                   0b2689b4e1366       kube-controller-manager-kubernetes-upgrade-909000
	b444986216faa       bbb47a0f83324       6 seconds ago       Running             kube-apiserver            2                   102f4f1bb4ab5       kube-apiserver-kubernetes-upgrade-909000
	4b206994360c2       a0eed15eed449       6 seconds ago       Running             etcd                      2                   c9320feacdf5b       etcd-kubernetes-upgrade-909000
	279019ebc9ff0       bbb47a0f83324       13 seconds ago      Exited              kube-apiserver            1                   6c6d6b6c3759c       kube-apiserver-kubernetes-upgrade-909000
	21a879cca228e       4270645ed6b7a       13 seconds ago      Exited              kube-scheduler            1                   f9606f266f2a2       kube-scheduler-kubernetes-upgrade-909000
	bcff6c946c409       a0eed15eed449       13 seconds ago      Exited              etcd                      1                   246ddc3725e49       etcd-kubernetes-upgrade-909000
	89201fdb2e1ad       d4e01cdf63970       36 seconds ago      Exited              kube-controller-manager   2                   093d23e5792a7       kube-controller-manager-kubernetes-upgrade-909000
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-909000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-909000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 11:16:50 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-909000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 11:18:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 11:18:43 +0000   Mon, 26 Feb 2024 11:16:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 11:18:43 +0000   Mon, 26 Feb 2024 11:16:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 11:18:43 +0000   Mon, 26 Feb 2024 11:16:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 11:18:43 +0000   Mon, 26 Feb 2024 11:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-909000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067676Ki
	  pods:               110
	System Info:
	  Machine ID:                 86bfbc5125d843528bfba347e61ea591
	  System UUID:                86bfbc5125d843528bfba347e61ea591
	  Boot ID:                    9bab7a14-a7a5-4b27-8332-833c53921260
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-909000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         2s
	  kube-system                 kube-apiserver-kubernetes-upgrade-909000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-909000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-kubernetes-upgrade-909000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 7s                   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x9 over 7s)      kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x7 over 7s)      kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)      kubelet  Node kubernetes-upgrade-909000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                   kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	
	
	==> etcd [4b206994360c] <==
	{"level":"info","ts":"2024-02-26T11:18:41.461982Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T11:18:41.462326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-26T11:18:41.46268Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-26T11:18:41.462854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.3"}
	{"level":"info","ts":"2024-02-26T11:18:41.462908Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.3"}
	{"level":"info","ts":"2024-02-26T11:18:41.463184Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2024-02-26T11:18:41.464078Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-26T11:18:41.464373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T11:18:41.464449Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T11:18:41.464567Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-26T11:18:41.464599Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T11:18:42.452832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:42.452886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:42.452983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:42.452997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2024-02-26T11:18:42.453001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2024-02-26T11:18:42.453007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2024-02-26T11:18:42.453012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2024-02-26T11:18:42.454568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:18:42.454617Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-909000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T11:18:42.45465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:18:42.456844Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T11:18:42.456901Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T11:18:42.460819Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T11:18:42.461239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [bcff6c946c40] <==
	{"level":"info","ts":"2024-02-26T11:18:35.968207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-26T11:18:35.968254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-26T11:18:35.968272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T11:18:35.968285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:35.968318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:35.968331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:35.96834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T11:18:35.969892Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-909000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T11:18:35.970095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:18:35.970166Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:18:35.971082Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T11:18:35.971946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T11:18:35.97358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-26T11:18:35.979228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T11:18:38.793914Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-26T11:18:38.793956Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-909000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-26T11:18:38.794136Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T11:18:38.794231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/02/26 11:18:38 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-02-26T11:18:38.804785Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T11:18:38.804848Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-26T11:18:38.804956Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-26T11:18:38.807355Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T11:18:38.807614Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T11:18:38.807647Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-909000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 11:18:48 up 51 min,  0 users,  load average: 6.79, 5.29, 4.62
	Linux kubernetes-upgrade-909000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [279019ebc9ff] <==
	W0226 11:18:38.799602       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799614       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799622       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799644       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799660       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799668       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799685       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799687       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799706       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799722       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799742       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799762       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799778       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799796       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.799812       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800141       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0226 11:18:38.800179       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0226 11:18:38.800292       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800336       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800423       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800487       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800532       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800549       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800553       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 11:18:38.800708       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b444986216fa] <==
	I0226 11:18:43.581670       1 controller.go:85] Starting OpenAPI V3 controller
	I0226 11:18:43.581682       1 naming_controller.go:291] Starting NamingConditionController
	I0226 11:18:43.581694       1 establishing_controller.go:76] Starting EstablishingController
	I0226 11:18:43.581707       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0226 11:18:43.581717       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0226 11:18:43.581745       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0226 11:18:43.658386       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 11:18:43.676703       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0226 11:18:43.676775       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0226 11:18:43.739227       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0226 11:18:43.739404       1 aggregator.go:165] initial CRD sync complete...
	I0226 11:18:43.739456       1 autoregister_controller.go:141] Starting autoregister controller
	I0226 11:18:43.739475       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0226 11:18:43.739659       1 cache.go:39] Caches are synced for autoregister controller
	I0226 11:18:43.740192       1 shared_informer.go:318] Caches are synced for configmaps
	I0226 11:18:43.740378       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0226 11:18:43.740438       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0226 11:18:43.740584       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 11:18:43.751650       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0226 11:18:44.580357       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0226 11:18:45.050567       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0226 11:18:45.059134       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0226 11:18:45.078425       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0226 11:18:45.093614       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0226 11:18:45.098615       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [89201fdb2e1a] <==
	I0226 11:18:12.421122       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0226 11:18:12.421130       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0226 11:18:12.421141       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:18:12.421136       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:18:12.421231       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0226 11:18:12.421236       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0226 11:18:12.421249       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:18:12.421698       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0226 11:18:12.421841       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0226 11:18:12.421875       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0226 11:18:12.421887       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0226 11:18:12.428966       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0226 11:18:12.428993       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0226 11:18:12.429004       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0226 11:18:12.435535       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0226 11:18:12.435600       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0226 11:18:12.435729       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0226 11:18:12.435740       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0226 11:18:12.442869       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0226 11:18:12.443037       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0226 11:18:12.443071       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0226 11:18:12.449608       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0226 11:18:12.449750       1 ttl_controller.go:124] "Starting TTL controller"
	I0226 11:18:12.449758       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0226 11:18:12.484305       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [92c9b8ad75ce] <==
	I0226 11:18:45.748177       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0226 11:18:45.748227       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0226 11:18:45.748252       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0226 11:18:45.748293       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0226 11:18:45.748306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0226 11:18:45.748316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0226 11:18:45.748326       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0226 11:18:45.748345       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0226 11:18:45.748354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0226 11:18:45.748392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0226 11:18:45.748417       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0226 11:18:45.748491       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0226 11:18:45.748506       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0226 11:18:45.748737       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0226 11:18:45.748782       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0226 11:18:45.748808       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0226 11:18:45.752951       1 shared_informer.go:318] Caches are synced for tokens
	I0226 11:18:45.756472       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0226 11:18:45.756773       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0226 11:18:45.756820       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0226 11:18:45.762287       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0226 11:18:45.762669       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0226 11:18:45.762739       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0226 11:18:45.769073       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0226 11:18:45.769240       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-scheduler [01770285e81d] <==
	I0226 11:18:42.562687       1 serving.go:380] Generated self-signed cert in-memory
	W0226 11:18:43.639918       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0226 11:18:43.640532       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 11:18:43.640551       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0226 11:18:43.640692       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0226 11:18:43.741616       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0226 11:18:43.741672       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:18:43.743342       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0226 11:18:43.743434       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0226 11:18:43.743448       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:18:43.743458       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0226 11:18:43.843794       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [21a879cca228] <==
	I0226 11:18:34.687244       1 serving.go:380] Generated self-signed cert in-memory
	W0226 11:18:37.277301       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0226 11:18:37.277425       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0226 11:18:37.277447       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0226 11:18:37.277458       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0226 11:18:37.343892       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0226 11:18:37.343930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:18:37.345054       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0226 11:18:37.345169       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0226 11:18:37.346136       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:18:37.346589       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0226 11:18:37.446628       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:18:38.792863       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0226 11:18:38.793845       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0226 11:18:38.794453       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966278   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02cbad9b08e8976831f314c5fda0c926-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-909000\" (UID: \"02cbad9b08e8976831f314c5fda0c926\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966295   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f966be686d49fa50a32b6ccba0e239-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-909000\" (UID: \"c3f966be686d49fa50a32b6ccba0e239\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966310   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d4ee55b1c87deffb98680e70cb6fc29f-etcd-certs\") pod \"etcd-kubernetes-upgrade-909000\" (UID: \"d4ee55b1c87deffb98680e70cb6fc29f\") " pod="kube-system/etcd-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966327   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3f966be686d49fa50a32b6ccba0e239-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-909000\" (UID: \"c3f966be686d49fa50a32b6ccba0e239\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966341   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f966be686d49fa50a32b6ccba0e239-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-909000\" (UID: \"c3f966be686d49fa50a32b6ccba0e239\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966355   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3f966be686d49fa50a32b6ccba0e239-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-909000\" (UID: \"c3f966be686d49fa50a32b6ccba0e239\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966368   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d4ee55b1c87deffb98680e70cb6fc29f-etcd-data\") pod \"etcd-kubernetes-upgrade-909000\" (UID: \"d4ee55b1c87deffb98680e70cb6fc29f\") " pod="kube-system/etcd-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966391   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02cbad9b08e8976831f314c5fda0c926-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-909000\" (UID: \"02cbad9b08e8976831f314c5fda0c926\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-909000"
	Feb 26 11:18:40 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:40.966409   14842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02cbad9b08e8976831f314c5fda0c926-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-909000\" (UID: \"02cbad9b08e8976831f314c5fda0c926\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-909000"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.081060   14842 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-909000"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: E0226 11:18:41.081316   14842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-909000"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.245403   14842 scope.go:117] "RemoveContainer" containerID="bcff6c946c4090f276ee2516f9f8765d1f8489edfaa1048b66f19cd1bf229944"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.252864   14842 scope.go:117] "RemoveContainer" containerID="279019ebc9ff0d6c05ea7016ac6b639073ccebaedccb8cf9fe2c37584ea59a19"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.259791   14842 scope.go:117] "RemoveContainer" containerID="89201fdb2e1ad3fa3c11731cc6eadb83499e0d2f01b7176aba457744cd00f493"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.266810   14842 scope.go:117] "RemoveContainer" containerID="21a879cca228eaac104b63bbff2792d66355328be75008cb4b84908f400eae19"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: E0226 11:18:41.368070   14842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-909000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:41.547692   14842 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-909000"
	Feb 26 11:18:41 kubernetes-upgrade-909000 kubelet[14842]: E0226 11:18:41.548081   14842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-909000"
	Feb 26 11:18:42 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:42.356772   14842 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-909000"
	Feb 26 11:18:43 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:43.750197   14842 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-909000"
	Feb 26 11:18:43 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:43.750322   14842 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-909000"
	Feb 26 11:18:43 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:43.754852   14842 apiserver.go:52] "Watching apiserver"
	Feb 26 11:18:43 kubernetes-upgrade-909000 kubelet[14842]: I0226 11:18:43.764769   14842 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 26 11:18:44 kubernetes-upgrade-909000 kubelet[14842]: E0226 11:18:44.059671   14842 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-909000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-909000"
	Feb 26 11:18:44 kubernetes-upgrade-909000 kubelet[14842]: E0226 11:18:44.059949   14842 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-909000\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-909000"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-909000 -n kubernetes-upgrade-909000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-909000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-kubernetes-upgrade-909000 kube-scheduler-kubernetes-upgrade-909000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-909000 describe pod etcd-kubernetes-upgrade-909000 kube-scheduler-kubernetes-upgrade-909000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-909000 describe pod etcd-kubernetes-upgrade-909000 kube-scheduler-kubernetes-upgrade-909000 storage-provisioner: exit status 1 (53.621195ms)

** stderr ** 
	Error from server (NotFound): pods "etcd-kubernetes-upgrade-909000" not found
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-909000" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-909000 describe pod etcd-kubernetes-upgrade-909000 kube-scheduler-kubernetes-upgrade-909000 storage-provisioner: exit status 1
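The post-mortem above first lists pods with --field-selector=status.phase!=Running, then describes each by name; the NotFound errors suggest the static control-plane pods were deleted and recreated between the two kubectl calls, consistent with an upgrade still in flight. A minimal sketch of the same check against this profile (context name taken from the run above; the jsonpath output is space-separated pod names):

	kubectl --context kubernetes-upgrade-909000 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'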
helpers_test.go:175: Cleaning up "kubernetes-upgrade-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-909000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-909000: (2.496873338s)
--- FAIL: TestKubernetesUpgrade (405.20s)
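For local triage, this test can be re-run in isolation. A minimal sketch, assuming a minikube source checkout (the helpers above live under test/integration) and a freshly built out/minikube-darwin-amd64; the -run pattern and -timeout value are illustrative, not the exact CI invocation:

	# re-run only TestKubernetesUpgrade with verbose output; flags are assumptions
	go test ./test/integration -run 'TestKubernetesUpgrade' -timeout 60m -v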

TestStartStop/group/old-k8s-version/serial/FirstStart (259.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0226 03:25:41.112235   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:25:46.882202   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:46.887316   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:46.898424   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:46.919449   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:46.959875   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:47.040240   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:47.201595   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:47.521993   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:48.162304   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:49.442816   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:52.003009   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:25:57.124407   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:26:05.658126   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:26:07.364707   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:26:07.539204   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.544676   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.555176   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.576543   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.616966   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.697183   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:07.858506   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:08.179007   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:08.819324   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:10.099506   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:12.660416   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:26:17.781159   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m18.640931809s)

-- stdout --
	* [old-k8s-version-326000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0226 03:25:37.769475   25643 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:25:37.769749   25643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:25:37.769755   25643 out.go:304] Setting ErrFile to fd 2...
	I0226 03:25:37.769758   25643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:25:37.770010   25643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:25:37.772248   25643 out.go:298] Setting JSON to false
	I0226 03:25:37.798753   25643 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":12308,"bootTime":1708934429,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:25:37.798846   25643 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:25:37.822175   25643 out.go:177] * [old-k8s-version-326000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:25:37.881228   25643 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:25:37.860242   25643 notify.go:220] Checking for updates...
	I0226 03:25:37.940122   25643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:25:37.999276   25643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:25:38.058207   25643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:25:38.132002   25643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:25:38.207049   25643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:25:38.244711   25643 config.go:182] Loaded profile config "calico-722000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 03:25:38.244802   25643 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:25:38.300356   25643 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:25:38.300673   25643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:25:38.404560   25643 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:25:38.393629526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:25:38.449376   25643 out.go:177] * Using the docker driver based on user configuration
	I0226 03:25:38.470274   25643 start.go:299] selected driver: docker
	I0226 03:25:38.470291   25643 start.go:903] validating driver "docker" against <nil>
	I0226 03:25:38.470300   25643 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:25:38.475160   25643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:25:38.592677   25643 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:25:38.580813388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:25:38.592939   25643 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 03:25:38.593134   25643 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 03:25:38.632831   25643 out.go:177] * Using Docker Desktop driver with root privileges
	I0226 03:25:38.668556   25643 cni.go:84] Creating CNI manager for ""
	I0226 03:25:38.668585   25643 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:25:38.668595   25643 start_flags.go:323] config:
	{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:25:38.689664   25643 out.go:177] * Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	I0226 03:25:38.763850   25643 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:25:38.784699   25643 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:25:38.826628   25643 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:25:38.826664   25643 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:25:38.826686   25643 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 03:25:38.826698   25643 cache.go:56] Caching tarball of preloaded images
	I0226 03:25:38.826880   25643 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:25:38.826896   25643 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 03:25:38.827530   25643 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0226 03:25:38.827698   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/config.json: {Name:mk2fc74782874a86ba5a3dce94f94de712abb39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:38.883362   25643 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:25:38.883417   25643 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:25:38.883438   25643 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:25:38.883496   25643 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk2beedabea14e6b62e464a057af5ed4bd127b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:25:38.883657   25643 start.go:369] acquired machines lock for "old-k8s-version-326000" in 147.201µs
	I0226 03:25:38.883685   25643 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 03:25:38.883772   25643 start.go:125] createHost starting for "" (driver="docker")
	I0226 03:25:38.905737   25643 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0226 03:25:38.906048   25643 start.go:159] libmachine.API.Create for "old-k8s-version-326000" (driver="docker")
	I0226 03:25:38.906097   25643 client.go:168] LocalClient.Create starting
	I0226 03:25:38.906217   25643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem
	I0226 03:25:38.906266   25643 main.go:141] libmachine: Decoding PEM data...
	I0226 03:25:38.906286   25643 main.go:141] libmachine: Parsing certificate...
	I0226 03:25:38.906350   25643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem
	I0226 03:25:38.906385   25643 main.go:141] libmachine: Decoding PEM data...
	I0226 03:25:38.906392   25643 main.go:141] libmachine: Parsing certificate...
	I0226 03:25:38.906851   25643 cli_runner.go:164] Run: docker network inspect old-k8s-version-326000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 03:25:38.963168   25643 cli_runner.go:211] docker network inspect old-k8s-version-326000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 03:25:38.963277   25643 network_create.go:281] running [docker network inspect old-k8s-version-326000] to gather additional debugging logs...
	I0226 03:25:38.963298   25643 cli_runner.go:164] Run: docker network inspect old-k8s-version-326000
	W0226 03:25:39.015518   25643 cli_runner.go:211] docker network inspect old-k8s-version-326000 returned with exit code 1
	I0226 03:25:39.015555   25643 network_create.go:284] error running [docker network inspect old-k8s-version-326000]: docker network inspect old-k8s-version-326000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-326000 not found
	I0226 03:25:39.015566   25643 network_create.go:286] output of [docker network inspect old-k8s-version-326000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-326000 not found
	
	** /stderr **
	I0226 03:25:39.015701   25643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 03:25:39.070889   25643 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:25:39.072512   25643 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:25:39.073861   25643 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:25:39.075186   25643 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0226 03:25:39.075558   25643 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024083f0}
	I0226 03:25:39.075576   25643 network_create.go:124] attempt to create docker network old-k8s-version-326000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0226 03:25:39.075647   25643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-326000 old-k8s-version-326000
	I0226 03:25:39.162849   25643 network_create.go:108] docker network old-k8s-version-326000 192.168.85.0/24 created
	I0226 03:25:39.162895   25643 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-326000" container
	I0226 03:25:39.162998   25643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 03:25:39.213222   25643 cli_runner.go:164] Run: docker volume create old-k8s-version-326000 --label name.minikube.sigs.k8s.io=old-k8s-version-326000 --label created_by.minikube.sigs.k8s.io=true
	I0226 03:25:39.263456   25643 oci.go:103] Successfully created a docker volume old-k8s-version-326000
	I0226 03:25:39.263568   25643 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-326000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-326000 --entrypoint /usr/bin/test -v old-k8s-version-326000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 03:25:39.718840   25643 oci.go:107] Successfully prepared a docker volume old-k8s-version-326000
	I0226 03:25:39.718929   25643 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:25:39.718960   25643 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 03:25:39.719110   25643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-326000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 03:25:41.783454   25643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-326000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.064261991s)
	I0226 03:25:41.783491   25643 kic.go:203] duration metric: took 2.064524 seconds to extract preloaded images to volume
	I0226 03:25:41.783615   25643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 03:25:41.901528   25643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-326000 --name old-k8s-version-326000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-326000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-326000 --network old-k8s-version-326000 --ip 192.168.85.2 --volume old-k8s-version-326000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 03:25:42.207089   25643 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Running}}
	I0226 03:25:42.270013   25643 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Status}}
	I0226 03:25:42.327912   25643 cli_runner.go:164] Run: docker exec old-k8s-version-326000 stat /var/lib/dpkg/alternatives/iptables
	I0226 03:25:42.441698   25643 oci.go:144] the created container "old-k8s-version-326000" has a running status.
	I0226 03:25:42.441741   25643 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa...
	I0226 03:25:42.523687   25643 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 03:25:42.588459   25643 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Status}}
	I0226 03:25:42.642547   25643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 03:25:42.642569   25643 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-326000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 03:25:42.744930   25643 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Status}}
	I0226 03:25:42.810539   25643 machine.go:88] provisioning docker machine ...
	I0226 03:25:42.810598   25643 ubuntu.go:169] provisioning hostname "old-k8s-version-326000"
	I0226 03:25:42.810703   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:42.862134   25643 main.go:141] libmachine: Using SSH client type: native
	I0226 03:25:42.862350   25643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd19b920] 0xd19e680 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0226 03:25:42.862362   25643 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-326000 && echo "old-k8s-version-326000" | sudo tee /etc/hostname
	I0226 03:25:43.030542   25643 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-326000
	
	I0226 03:25:43.030635   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:43.088336   25643 main.go:141] libmachine: Using SSH client type: native
	I0226 03:25:43.088525   25643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd19b920] 0xd19e680 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0226 03:25:43.088541   25643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-326000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-326000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-326000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:25:43.227422   25643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:25:43.227449   25643 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:25:43.227478   25643 ubuntu.go:177] setting up certificates
	I0226 03:25:43.227484   25643 provision.go:83] configureAuth start
	I0226 03:25:43.227562   25643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:25:43.285063   25643 provision.go:138] copyHostCerts
	I0226 03:25:43.285182   25643 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:25:43.285218   25643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:25:43.285429   25643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:25:43.285663   25643 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:25:43.285671   25643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:25:43.285906   25643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:25:43.286116   25643 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:25:43.286123   25643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:25:43.286208   25643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:25:43.286359   25643 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-326000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-326000]
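The san=[...] list above becomes the subject alternative names of the generated server certificate. A self-contained sketch of producing a certificate with those SANs via Go's crypto/x509 (self-signed here for brevity; the real provisioner signs with the CA key from certs/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs copied from the log line above; DNS names and IPs live in
	// separate fields of the certificate template.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-326000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-326000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}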
	I0226 03:25:43.381273   25643 provision.go:172] copyRemoteCerts
	I0226 03:25:43.381361   25643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:25:43.381450   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:43.434391   25643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:25:43.539450   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:25:43.580147   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0226 03:25:43.620399   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 03:25:43.661190   25643 provision.go:86] duration metric: configureAuth took 433.686375ms
	I0226 03:25:43.661209   25643 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:25:43.661382   25643 config.go:182] Loaded profile config "old-k8s-version-326000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 03:25:43.661456   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:43.712346   25643 main.go:141] libmachine: Using SSH client type: native
	I0226 03:25:43.712509   25643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd19b920] 0xd19e680 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0226 03:25:43.712526   25643 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:25:43.849947   25643 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:25:43.849959   25643 ubuntu.go:71] root file system type: overlay
	I0226 03:25:43.850031   25643 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:25:43.850115   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:43.901021   25643 main.go:141] libmachine: Using SSH client type: native
	I0226 03:25:43.901207   25643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd19b920] 0xd19e680 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0226 03:25:43.901257   25643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:25:44.063904   25643 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:25:44.064037   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:44.116096   25643 main.go:141] libmachine: Using SSH client type: native
	I0226 03:25:44.116289   25643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd19b920] 0xd19e680 <nil>  [] 0s} 127.0.0.1 61670 <nil> <nil>}
	I0226 03:25:44.116302   25643 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:25:44.787608   25643 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-26 11:25:44.057771063 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0226 03:25:44.787632   25643 machine.go:91] provisioned docker machine in 1.977047677s
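Note that the restart just logged was conditional: the SSH command pairs diff -u with || so the new unit is only moved into place (followed by daemon-reload, enable, and restart) when the rendered file differs from the installed one, which is why the full unified diff appears in the output above. A sketch of composing that guard:

package main

import "fmt"

func main() {
	// Hypothetical reconstruction of the update-if-changed guard from the
	// log: diff exits 0 when the files match, so the replace-and-restart
	// branch only runs when the rendered unit actually changed.
	unit := "/lib/systemd/system/docker.service"
	fmt.Printf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }\n", unit)
}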
	I0226 03:25:44.787644   25643 client.go:171] LocalClient.Create took 5.881499942s
	I0226 03:25:44.787662   25643 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-326000" took 5.881580726s
	I0226 03:25:44.787670   25643 start.go:300] post-start starting for "old-k8s-version-326000" (driver="docker")
	I0226 03:25:44.787678   25643 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:25:44.787740   25643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:25:44.787808   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:44.838905   25643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:25:44.940436   25643 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:25:44.944438   25643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:25:44.944460   25643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:25:44.944467   25643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:25:44.944474   25643 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:25:44.944484   25643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:25:44.944589   25643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:25:44.944790   25643 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:25:44.944997   25643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:25:44.959940   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:25:45.000637   25643 start.go:303] post-start completed in 212.956114ms
	I0226 03:25:45.001237   25643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:25:45.051926   25643 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0226 03:25:45.052424   25643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:25:45.052488   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:45.103023   25643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:25:45.194939   25643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 03:25:45.201680   25643 start.go:128] duration metric: createHost completed in 6.317849216s
	I0226 03:25:45.201704   25643 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 6.318002801s
	I0226 03:25:45.201835   25643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:25:45.259305   25643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:25:45.259314   25643 ssh_runner.go:195] Run: cat /version.json
	I0226 03:25:45.259396   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:45.259409   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:45.319315   25643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:25:45.319305   25643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61670 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:25:45.411951   25643 ssh_runner.go:195] Run: systemctl --version
	I0226 03:25:45.524624   25643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 03:25:45.531017   25643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 03:25:45.580024   25643 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0226 03:25:45.580119   25643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 03:25:45.608786   25643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 03:25:45.641294   25643 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0226 03:25:45.641315   25643 start.go:475] detecting cgroup driver to use...
	I0226 03:25:45.641350   25643 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:25:45.641542   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:25:45.672646   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 03:25:45.690593   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:25:45.707046   25643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:25:45.707109   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:25:45.723177   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:25:45.739267   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:25:45.755337   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:25:45.771298   25643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:25:45.787697   25643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:25:45.804513   25643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:25:45.819540   25643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:25:45.834597   25643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:25:45.897490   25643 ssh_runner.go:195] Run: sudo systemctl restart containerd
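The sed runs above rewrite /etc/containerd/config.toml so containerd agrees with the cgroupfs driver detected on the host (the same driver later mirrored in the kubelet's cgroupDriver setting). A small sketch of the SystemdCgroup edit done in Go instead of sed, on a toy config string:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// In-process equivalent of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf := "[plugins.cri]\n  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}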
	I0226 03:25:45.988156   25643 start.go:475] detecting cgroup driver to use...
	I0226 03:25:45.988184   25643 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:25:45.988255   25643 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:25:46.011182   25643 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:25:46.011248   25643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:25:46.030202   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:25:46.060565   25643 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:25:46.065179   25643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:25:46.081616   25643 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:25:46.112565   25643 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:25:46.176244   25643 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:25:46.269773   25643 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:25:46.269874   25643 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:25:46.302031   25643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:25:46.372906   25643 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:25:46.667012   25643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:25:46.693486   25643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:25:46.762639   25643 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 03:25:46.762747   25643 cli_runner.go:164] Run: docker exec -t old-k8s-version-326000 dig +short host.docker.internal
	I0226 03:25:46.870855   25643 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:25:46.870966   25643 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:25:46.875760   25643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:25:46.893250   25643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:25:46.944077   25643 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:25:46.944155   25643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:25:46.963743   25643 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:25:46.963756   25643 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:25:46.963816   25643 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:25:46.979453   25643 ssh_runner.go:195] Run: which lz4
	I0226 03:25:46.983845   25643 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 03:25:46.988157   25643 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 03:25:46.988179   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 03:25:54.040766   25643 docker.go:649] Took 7.056919 seconds to copy over tarball
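For scale: that is 369,789,069 bytes in 7.057 s, i.e. 369789069 / 7.057 ≈ 52.4 MB/s (about 50 MiB/s) through the loopback SSH connection on port 61670.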
	I0226 03:25:54.040879   25643 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 03:25:55.908537   25643 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.867613134s)
	I0226 03:25:55.908557   25643 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 03:25:55.968541   25643 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:25:55.985397   25643 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 03:25:56.013942   25643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:25:56.099456   25643 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:25:56.623300   25643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:25:56.650170   25643 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:25:56.650194   25643 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:25:56.650203   25643 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
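The mismatch driving this branch is visible in the listing above: the preload tarball ships images still tagged with the old k8s.gcr.io registry, while LoadImages checks for the renamed registry.k8s.io names, so a plain name comparison never matches and every image is treated as missing (the per-image cache files it then falls back to do not exist either, hence the warning later in this log). A tiny sketch of the comparison:

package main

import (
	"fmt"
	"strings"
)

func main() {
	preloaded := "k8s.gcr.io/kube-apiserver:v1.16.0"
	want := "registry.k8s.io/kube-apiserver:v1.16.0"
	// Exact match fails because only the registry host was renamed...
	fmt.Println(preloaded == want) // false
	// ...while the repository and tag are identical.
	fmt.Println(strings.TrimPrefix(preloaded, "k8s.gcr.io/") ==
		strings.TrimPrefix(want, "registry.k8s.io/")) // true
}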
	I0226 03:25:56.657023   25643 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:25:56.657816   25643 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 03:25:56.657874   25643 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:25:56.658232   25643 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:25:56.659763   25643 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:25:56.660771   25643 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 03:25:56.661353   25643 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:25:56.662055   25643 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:25:56.663547   25643 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 03:25:56.665562   25643 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:25:56.665807   25643 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:25:56.666134   25643 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:25:56.667030   25643 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:25:56.668582   25643 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:25:56.668732   25643 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:25:56.668924   25643 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 03:25:58.642598   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:25:58.668329   25643 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 03:25:58.668405   25643 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:25:58.668480   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:25:58.686776   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0226 03:25:58.723429   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 03:25:58.746924   25643 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 03:25:58.746960   25643 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:25:58.747045   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0226 03:25:58.768185   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0226 03:25:58.772139   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:25:58.773379   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0226 03:25:58.785061   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:25:58.790109   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 03:25:58.790671   25643 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 03:25:58.790700   25643 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:25:58.790773   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:25:58.793168   25643 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 03:25:58.793201   25643 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 03:25:58.793288   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 03:25:58.802113   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:25:58.805133   25643 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 03:25:58.805171   25643 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:25:58.805278   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:25:58.810167   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0226 03:25:58.812144   25643 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 03:25:58.812172   25643 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 03:25:58.812237   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 03:25:58.813286   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0226 03:25:58.827158   25643 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 03:25:58.827194   25643 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:25:58.827313   25643 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:25:58.831834   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0226 03:25:58.840306   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0226 03:25:58.849042   25643 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0226 03:25:59.294659   25643 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:25:59.314598   25643 cache_images.go:92] LoadImages completed in 2.664364845s
	W0226 03:25:59.314660   25643 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0226 03:25:59.314742   25643 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:25:59.397512   25643 cni.go:84] Creating CNI manager for ""
	I0226 03:25:59.397531   25643 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:25:59.397554   25643 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 03:25:59.397568   25643 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-326000 NodeName:old-k8s-version-326000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 03:25:59.397663   25643 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-326000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-326000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 03:25:59.397717   25643 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-326000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
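Note that the kubeadm.yaml rendered above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers; kubeadm reads them all from the one --config path. A hypothetical consumer could split them like this:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Toy stand-in for the multi-document file generated above.
	config := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(config, "---\n") {
		fmt.Printf("document %d: %s", i, doc)
	}
}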
	I0226 03:25:59.397782   25643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 03:25:59.415350   25643 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:25:59.415420   25643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:25:59.432436   25643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0226 03:25:59.465506   25643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 03:25:59.498051   25643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0226 03:25:59.529173   25643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:25:59.534179   25643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:25:59.554049   25643 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000 for IP: 192.168.85.2
	I0226 03:25:59.554072   25643 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.554255   25643 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:25:59.554324   25643 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:25:59.554378   25643 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.key
	I0226 03:25:59.554392   25643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.crt with IP's: []
	I0226 03:25:59.641018   25643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.crt ...
	I0226 03:25:59.641046   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.crt: {Name:mkd8d58c715fb6140ead109c517b1f913ec1ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.641414   25643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.key ...
	I0226 03:25:59.641425   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.key: {Name:mke6c74d38a22b87b281684e14038b0affc2602a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.641666   25643 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key.43b9df8c
	I0226 03:25:59.641681   25643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 03:25:59.796843   25643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt.43b9df8c ...
	I0226 03:25:59.796859   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt.43b9df8c: {Name:mk53d3613018eabf65c71bfb7b6378f91dd043d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.797162   25643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key.43b9df8c ...
	I0226 03:25:59.797171   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key.43b9df8c: {Name:mke32147866c154fd6b1e35286cf1e6a215c216e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.797386   25643 certs.go:337] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt.43b9df8c -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt
	I0226 03:25:59.797564   25643 certs.go:341] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key.43b9df8c -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key
	I0226 03:25:59.797726   25643 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key
	I0226 03:25:59.797746   25643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.crt with IP's: []
	I0226 03:25:59.926004   25643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.crt ...
	I0226 03:25:59.926028   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.crt: {Name:mkf19141b6c0713c614eb8b9c7324199e2395558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.926439   25643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key ...
	I0226 03:25:59.926453   25643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key: {Name:mkd3f5a85139444f4ce4c694bf88425c69080eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:25:59.927009   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:25:59.927106   25643 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:25:59.927139   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:25:59.927197   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:25:59.927250   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:25:59.927315   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:25:59.927445   25643 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:25:59.928102   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:25:59.975796   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 03:26:00.016136   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:26:00.064820   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 03:26:00.111480   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:26:00.161048   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:26:00.207224   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:26:00.256025   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:26:00.301251   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:26:00.349163   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:26:00.394164   25643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:26:00.446244   25643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:26:00.483499   25643 ssh_runner.go:195] Run: openssl version
	I0226 03:26:00.489684   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:26:00.505612   25643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:26:00.509941   25643 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:26:00.509989   25643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:26:00.517577   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:26:00.538524   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:26:00.560946   25643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:26:00.565430   25643 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:26:00.565484   25643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:26:00.572499   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:26:00.588415   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:26:00.604240   25643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:26:00.608504   25643 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:26:00.608553   25643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:26:00.615227   25643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
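The hash symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) implement OpenSSL's hashed-directory lookup: a verifier finds a CA in /etc/ssl/certs through a symlink named <subject-hash>.0, and the openssl x509 -hash calls in the log compute exactly that hash. A small sketch, assuming the openssl binary is on PATH and reusing a path from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Println("openssl unavailable or cert missing:", err)
		return
	}
	// The symlink the log creates for this cert would be <hash>.0.
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}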
	I0226 03:26:00.633976   25643 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:26:00.640363   25643 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 03:26:00.640429   25643 kubeadm.go:404] StartCluster: {Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:26:00.640563   25643 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:26:00.660644   25643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:26:00.677509   25643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:26:00.694327   25643 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:26:00.694388   25643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:26:00.710810   25643 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:26:00.710841   25643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:26:00.773444   25643 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:26:00.773544   25643 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:26:01.068490   25643 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:26:01.068649   25643 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:26:01.068822   25643 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:26:01.256642   25643 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:26:01.257819   25643 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:26:01.265816   25643 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:26:01.335998   25643 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:26:01.402735   25643 out.go:204]   - Generating certificates and keys ...
	I0226 03:26:01.402825   25643 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:26:01.402905   25643 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:26:01.553767   25643 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 03:26:01.678075   25643 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 03:26:01.824335   25643 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 03:26:01.918572   25643 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 03:26:02.298263   25643 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 03:26:02.298427   25643 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-326000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0226 03:26:02.449621   25643 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 03:26:02.449746   25643 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-326000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0226 03:26:02.669496   25643 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 03:26:02.870626   25643 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 03:26:03.141995   25643 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 03:26:03.142099   25643 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:26:03.381603   25643 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:26:03.603759   25643 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:26:03.780059   25643 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:26:03.830451   25643 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:26:03.830857   25643 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:26:03.875185   25643 out.go:204]   - Booting up control plane ...
	I0226 03:26:03.875340   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:26:03.875489   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:26:03.875621   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:26:03.875767   25643 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:26:03.876024   25643 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:26:43.846851   25643 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:26:43.847008   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:26:43.847224   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:26:48.848685   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:26:48.848855   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:26:58.850876   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:26:58.851038   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:27:18.852557   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:27:18.852728   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:27:58.854656   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:27:58.854846   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:27:58.854869   25643 kubeadm.go:322] 
	I0226 03:27:58.854903   25643 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:27:58.854933   25643 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:27:58.854939   25643 kubeadm.go:322] 
	I0226 03:27:58.854978   25643 kubeadm.go:322] This error is likely caused by:
	I0226 03:27:58.855015   25643 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:27:58.855096   25643 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:27:58.855104   25643 kubeadm.go:322] 
	I0226 03:27:58.855204   25643 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:27:58.855240   25643 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:27:58.855269   25643 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:27:58.855281   25643 kubeadm.go:322] 
	I0226 03:27:58.855375   25643 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:27:58.855453   25643 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:27:58.855527   25643 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:27:58.855569   25643 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:27:58.855633   25643 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:27:58.855662   25643 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:27:58.859589   25643 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:27:58.859672   25643 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:27:58.859790   25643 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:27:58.859884   25643 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:27:58.859967   25643 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:27:58.860053   25643 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
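Note: kubeadm's wait-control-plane phase polls the kubelet health endpoint at http://localhost:10248/healthz; the repeated "connection refused" above means the kubelet never came up at all, rather than coming up unhealthy. A sketch of reproducing the probe by hand inside the node, assuming the kic container is named after the profile as is usual for the docker driver:

	docker exec -it old-k8s-version-326000 bash   # enter the node container (name assumed)
	curl -sSL http://localhost:10248/healthz      # the exact probe kubeadm runs
	systemctl status kubelet                      # is the unit active?
	journalctl -xeu kubelet | tail -n 50          # why the kubelet exited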
	W0226 03:27:58.860139   25643 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-326000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-326000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
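Note: the four stderr warnings above are preflight advisories rather than the failure itself; init died later, in wait-control-plane. For completeness, a sketch of the conventional remediations for the actionable warnings, assuming a stock systemd host (the daemon.json path and restart step are the usual defaults, not taken from this run):

	# /etc/docker/daemon.json -- switch Docker to the systemd cgroup driver
	{ "exec-opts": ["native.cgroupdriver=systemd"] }

	sudo systemctl restart docker           # apply the cgroup-driver change
	sudo swapoff -a                         # disable swap for the current boot
	sudo systemctl enable kubelet.service   # clear the Service-Kubelet warning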
	
	I0226 03:27:58.860186   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 03:27:59.286743   25643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:27:59.304109   25643 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:27:59.304162   25643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:27:59.319319   25643 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:27:59.319343   25643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:27:59.379367   25643 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:27:59.379408   25643 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:27:59.660953   25643 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:27:59.661039   25643 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:27:59.661116   25643 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:27:59.833162   25643 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:27:59.833797   25643 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:27:59.840267   25643 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:27:59.897619   25643 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:27:59.919208   25643 out.go:204]   - Generating certificates and keys ...
	I0226 03:27:59.919275   25643 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:27:59.919352   25643 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:27:59.919416   25643 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:27:59.919461   25643 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:27:59.919549   25643 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:27:59.919619   25643 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:27:59.919687   25643 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:27:59.919740   25643 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:27:59.919800   25643 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:27:59.919848   25643 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:27:59.919884   25643 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:27:59.919932   25643 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:28:00.005823   25643 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:28:00.148730   25643 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:28:00.346066   25643 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:28:00.763315   25643 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:28:00.763817   25643 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:28:00.785382   25643 out.go:204]   - Booting up control plane ...
	I0226 03:28:00.785526   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:28:00.785745   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:28:00.785857   25643 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:28:00.785977   25643 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:28:00.786167   25643 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:28:40.773217   25643 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:28:40.773753   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:28:40.773905   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:28:45.774975   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:28:45.775198   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:28:55.776836   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:28:55.777145   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:29:15.778721   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:29:15.778944   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:29:55.779624   25643 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:29:55.779794   25643 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:29:55.779805   25643 kubeadm.go:322] 
	I0226 03:29:55.779835   25643 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:29:55.779867   25643 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:29:55.779873   25643 kubeadm.go:322] 
	I0226 03:29:55.779909   25643 kubeadm.go:322] This error is likely caused by:
	I0226 03:29:55.779943   25643 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:29:55.780022   25643 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:29:55.780029   25643 kubeadm.go:322] 
	I0226 03:29:55.780107   25643 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:29:55.780135   25643 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:29:55.780178   25643 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:29:55.780188   25643 kubeadm.go:322] 
	I0226 03:29:55.780309   25643 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:29:55.780395   25643 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:29:55.780475   25643 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:29:55.780514   25643 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:29:55.780571   25643 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:29:55.780607   25643 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:29:55.784514   25643 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:29:55.784581   25643 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:29:55.784688   25643 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:29:55.784781   25643 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:29:55.784858   25643 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:29:55.784919   25643 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 03:29:55.784953   25643 kubeadm.go:406] StartCluster complete in 3m55.143193105s
	I0226 03:29:55.785036   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:29:55.800889   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.800902   25643 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:29:55.800971   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:29:55.817435   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.817449   25643 logs.go:278] No container was found matching "etcd"
	I0226 03:29:55.817518   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:29:55.834181   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.834196   25643 logs.go:278] No container was found matching "coredns"
	I0226 03:29:55.834274   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:29:55.853280   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.853295   25643 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:29:55.853389   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:29:55.870671   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.870685   25643 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:29:55.870750   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:29:55.887818   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.887834   25643 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:29:55.887921   25643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:29:55.906026   25643 logs.go:276] 0 containers: []
	W0226 03:29:55.906041   25643 logs.go:278] No container was found matching "kindnet"
	I0226 03:29:55.906049   25643 logs.go:123] Gathering logs for Docker ...
	I0226 03:29:55.906057   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:29:55.931805   25643 logs.go:123] Gathering logs for container status ...
	I0226 03:29:55.931819   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:29:55.990788   25643 logs.go:123] Gathering logs for kubelet ...
	I0226 03:29:55.990801   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:29:56.031854   25643 logs.go:123] Gathering logs for dmesg ...
	I0226 03:29:56.031870   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:29:56.052013   25643 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:29:56.052028   25643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:29:56.129959   25643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
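Note: "describe nodes" is refused on localhost:8443 because the kube-apiserver static pod never started; every container scan above returned 0 containers, consistent with the kubelet being down rather than a control-plane component crashing. The same evidence can be gathered by hand with the commands minikube itself ran:

	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}   # empty in this run
	sudo journalctl -u kubelet -n 400                                # kubelet exit reason
	sudo crictl ps -a || sudo docker ps -a                           # overall container status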
	W0226 03:29:56.129987   25643 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 03:29:56.130001   25643 out.go:239] * 
	W0226 03:29:56.130042   25643 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:29:56.130057   25643 out.go:239] * 
	W0226 03:29:56.130721   25643 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
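Note: per the box above, the usual next step when reporting this is to capture the full log bundle for the failing profile, along the lines of (profile name taken from this run; flags as documented for minikube logs):

	out/minikube-darwin-amd64 logs -p old-k8s-version-326000 --file=logs.txt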
	I0226 03:29:56.195252   25643 out.go:177] 
	W0226 03:29:56.239400   25643 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:29:56.239448   25643 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 03:29:56.239468   25643 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 03:29:56.283439   25643 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
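For reference, a minimal retry sketch assembled from the suggestion printed in the log above. The profile name and flags are copied from this run; whether --extra-config=kubelet.cgroup-driver=systemd actually resolves the kubelet failure on this kicbase image is an assumption, not a verified fix.

	# hedged sketch: delete the broken profile, then retry with the suggested kubelet cgroup driver
	out/minikube-darwin-amd64 delete -p old-k8s-version-326000
	out/minikube-darwin-amd64 start -p old-k8s-version-326000 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd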
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 362474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:25:42.199375659Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe61ab8db2f03058a0ce9593ef2c3320e14dee5741d4c57f433e0109d3f55670",
	            "SandboxKey": "/var/run/docker/netns/fe61ab8db2f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61666"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "b031a27c1724c8987e54177c03d8b226685ddc850d77afdccebb384c1cf0ebf2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 6 (448.287149ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 03:29:56.876835   26565 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-326000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (259.23s)
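The status check above reports a stale kubeconfig entry; a minimal sketch of the fix the WARNING itself suggests (assuming the profile still exists):

	# hedged sketch: repoint kubectl at this profile, as the WARNING advises
	out/minikube-darwin-amd64 update-context -p old-k8s-version-326000
	kubectl config current-context   # verify which context is now active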

TestStartStop/group/old-k8s-version/serial/DeployApp (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml: exit status 1 (38.404184ms)

** stderr ** 
	error: context "old-k8s-version-326000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml failed: exit status 1
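The deploy step fails only because the kubeconfig never gained this context (see the status errors above); a hedged sketch for confirming that before retrying, using the context and manifest names from this run:

	# hedged sketch: confirm the context exists, then retry the create
	kubectl config get-contexts | grep old-k8s-version-326000 || echo "context missing"
	kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml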
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 362474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:25:42.199375659Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe61ab8db2f03058a0ce9593ef2c3320e14dee5741d4c57f433e0109d3f55670",
	            "SandboxKey": "/var/run/docker/netns/fe61ab8db2f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61666"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "b031a27c1724c8987e54177c03d8b226685ddc850d77afdccebb384c1cf0ebf2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 6 (399.431602ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 03:29:57.410011   26578 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-326000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-326000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0226 03:29:59.532874   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.538469   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.549320   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.569452   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.609612   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.690370   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:29:59.851719   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:00.172051   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:00.812367   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:02.092536   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:02.680469   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:30:04.652824   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:09.773317   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:11.420481   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:30:20.013804   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:40.495964   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:30:45.336321   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:30:46.883313   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:31:07.540020   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:31:14.570197   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:31:21.457257   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:31:23.905955   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:23.912217   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:23.923219   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:23.943687   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:23.984308   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:24.064984   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:24.225211   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:24.546093   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:24.601505   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:31:25.186859   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:25.409990   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:31:26.467040   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:29.027883   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:34.148193   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:31:35.241744   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-326000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.213949006s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
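The connection-refused errors above all point at 127.0.0.1:8443, i.e. kubectl inside the node could not reach the apiserver at all; the addon manifests themselves were never evaluated. A manual triage sketch (hypothetical follow-up commands, not part of the test run, assuming ss and curl are present in the kicbase image):

	# is anything listening on the apiserver port inside the node?
	out/minikube-darwin-amd64 -p old-k8s-version-326000 ssh -- sudo ss -tlnp | grep 8443
	# probe the apiserver health endpoint directly, skipping TLS verification
	out/minikube-darwin-amd64 -p old-k8s-version-326000 ssh -- curl -sk https://localhost:8443/healthz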
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-326000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system: exit status 1 (38.912435ms)

** stderr ** 
	error: context "old-k8s-version-326000" does not exist

** /stderr **
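Here kubectl fails before making any network call: --context old-k8s-version-326000 is resolved against the kubeconfig, and the entry is already gone (consistent with the kubeconfig endpoint error in the status check further down). A quick way to confirm, using the kubeconfig path this job runs with (illustrative only, not executed by the test):

	KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig kubectl config get-contexts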
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 362474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:25:42.199375659Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe61ab8db2f03058a0ce9593ef2c3320e14dee5741d4c57f433e0109d3f55670",
	            "SandboxKey": "/var/run/docker/netns/fe61ab8db2f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61666"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61668"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61669"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "b031a27c1724c8987e54177c03d8b226685ddc850d77afdccebb384c1cf0ebf2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
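Two details in the dump are easy to misread by hand: PortBindings requests HostPort "0", which asks Docker to assign ephemeral host ports, and the actual assignments (61666-61670) only appear later under NetworkSettings.Ports. Rather than scanning the full JSON, a Go template can pull out individual fields; the harness itself uses this template form later in this log (sketch only):

	# container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-326000
	# host port mapped to the container's SSH port
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-326000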
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 6 (403.280078ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0226 03:31:37.628923   26655 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-326000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.71s)
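The status probe degrades to exit status 6 because the container reports Running while the profile's kubeconfig entry is missing, so minikube cannot extract an endpoint IP from it. Outside CI, the remedy the warning itself suggests is usually sufficient (hypothetical here, since the test tears the profile down anyway):

	out/minikube-darwin-amd64 -p old-k8s-version-326000 update-context
	kubectl config current-context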

TestStartStop/group/old-k8s-version/serial/SecondStart (510.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0226 03:31:44.388962   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:32:04.888500   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:32:11.860288   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:32:39.558723   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:32:43.416626   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:32:45.868151   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:32:59.965355   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:33:01.530770   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:33:15.608727   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:33:29.216413   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:33:32.566556   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:33:40.784844   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:34:07.790781   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m27.791983772s)

-- stdout --
	* [old-k8s-version-326000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-326000" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
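Note that "Generating certificates and keys" and "Booting up control plane" each appear twice in the stdout above: the first kubeadm attempt did not produce a healthy control plane, minikube retried within the same start, and the run finally gave up after 8m27s with exit status 109. The first artifact to collect in this situation is the one the earlier advice box requests (illustrative command, not executed here):

	out/minikube-darwin-amd64 -p old-k8s-version-326000 logs --file=logs.txt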
** stderr ** 
	I0226 03:31:39.719955   26685 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:31:39.720209   26685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:31:39.720215   26685 out.go:304] Setting ErrFile to fd 2...
	I0226 03:31:39.720218   26685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:31:39.720393   26685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:31:39.721840   26685 out.go:298] Setting JSON to false
	I0226 03:31:39.744414   26685 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":12670,"bootTime":1708934429,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:31:39.744519   26685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:31:39.766194   26685 out.go:177] * [old-k8s-version-326000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:31:39.808088   26685 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:31:39.808215   26685 notify.go:220] Checking for updates...
	I0226 03:31:39.866864   26685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:31:39.924840   26685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:31:39.983003   26685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:31:40.003890   26685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:31:40.024863   26685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:31:40.046684   26685 config.go:182] Loaded profile config "old-k8s-version-326000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 03:31:40.084809   26685 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0226 03:31:40.105724   26685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:31:40.162139   26685 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:31:40.162313   26685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:31:40.272310   26685 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:31:40.261659748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:31:40.314851   26685 out.go:177] * Using the docker driver based on existing profile
	I0226 03:31:40.359089   26685 start.go:299] selected driver: docker
	I0226 03:31:40.359115   26685 start.go:903] validating driver "docker" against &{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:31:40.359235   26685 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:31:40.363642   26685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:31:40.466980   26685 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:31:40.45699635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:31:40.467192   26685 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 03:31:40.467245   26685 cni.go:84] Creating CNI manager for ""
	I0226 03:31:40.467260   26685 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:31:40.467269   26685 start_flags.go:323] config:
	{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:31:40.510262   26685 out.go:177] * Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	I0226 03:31:40.531301   26685 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:31:40.553330   26685 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:31:40.595372   26685 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:31:40.595424   26685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:31:40.595451   26685 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 03:31:40.595466   26685 cache.go:56] Caching tarball of preloaded images
	I0226 03:31:40.595690   26685 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:31:40.595708   26685 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0226 03:31:40.596671   26685 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0226 03:31:40.647617   26685 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:31:40.647635   26685 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:31:40.647656   26685 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:31:40.647704   26685 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk2beedabea14e6b62e464a057af5ed4bd127b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:31:40.647795   26685 start.go:369] acquired machines lock for "old-k8s-version-326000" in 72.121µs
	I0226 03:31:40.647818   26685 start.go:96] Skipping create...Using existing machine configuration
	I0226 03:31:40.647827   26685 fix.go:54] fixHost starting: 
	I0226 03:31:40.648056   26685 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Status}}
	I0226 03:31:40.698268   26685 fix.go:102] recreateIfNeeded on old-k8s-version-326000: state=Stopped err=<nil>
	W0226 03:31:40.698329   26685 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 03:31:40.720000   26685 out.go:177] * Restarting existing docker container for "old-k8s-version-326000" ...
	I0226 03:31:40.761865   26685 cli_runner.go:164] Run: docker start old-k8s-version-326000
	I0226 03:31:41.017262   26685 cli_runner.go:164] Run: docker container inspect old-k8s-version-326000 --format={{.State.Status}}
	I0226 03:31:41.068729   26685 kic.go:430] container "old-k8s-version-326000" state is running.
	I0226 03:31:41.069344   26685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:31:41.125025   26685 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0226 03:31:41.125499   26685 machine.go:88] provisioning docker machine ...
	I0226 03:31:41.125528   26685 ubuntu.go:169] provisioning hostname "old-k8s-version-326000"
	I0226 03:31:41.125595   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:41.185683   26685 main.go:141] libmachine: Using SSH client type: native
	I0226 03:31:41.186036   26685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x73cb920] 0x73ce680 <nil>  [] 0s} 127.0.0.1 61949 <nil> <nil>}
	I0226 03:31:41.186054   26685 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-326000 && echo "old-k8s-version-326000" | sudo tee /etc/hostname
	I0226 03:31:41.187874   26685 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0226 03:31:44.347067   26685 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-326000
	
	I0226 03:31:44.347163   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:44.397768   26685 main.go:141] libmachine: Using SSH client type: native
	I0226 03:31:44.397997   26685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x73cb920] 0x73ce680 <nil>  [] 0s} 127.0.0.1 61949 <nil> <nil>}
	I0226 03:31:44.398009   26685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-326000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-326000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-326000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:31:44.536247   26685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:31:44.536271   26685 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:31:44.536300   26685 ubuntu.go:177] setting up certificates
	I0226 03:31:44.536315   26685 provision.go:83] configureAuth start
	I0226 03:31:44.536396   26685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:31:44.587258   26685 provision.go:138] copyHostCerts
	I0226 03:31:44.587360   26685 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:31:44.587371   26685 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:31:44.587565   26685 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:31:44.587835   26685 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:31:44.587842   26685 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:31:44.587939   26685 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:31:44.588127   26685 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:31:44.588133   26685 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:31:44.588205   26685 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:31:44.588383   26685 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-326000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-326000]
	I0226 03:31:44.700088   26685 provision.go:172] copyRemoteCerts
	I0226 03:31:44.700155   26685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:31:44.700210   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:44.750660   26685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61949 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:31:44.853919   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:31:44.894257   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0226 03:31:44.934524   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 03:31:44.974982   26685 provision.go:86] duration metric: configureAuth took 438.648951ms
	I0226 03:31:44.974996   26685 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:31:44.975144   26685 config.go:182] Loaded profile config "old-k8s-version-326000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0226 03:31:44.975212   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.026128   26685 main.go:141] libmachine: Using SSH client type: native
	I0226 03:31:45.026295   26685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x73cb920] 0x73ce680 <nil>  [] 0s} 127.0.0.1 61949 <nil> <nil>}
	I0226 03:31:45.026303   26685 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:31:45.166267   26685 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:31:45.166287   26685 ubuntu.go:71] root file system type: overlay
	I0226 03:31:45.166384   26685 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:31:45.166473   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.216808   26685 main.go:141] libmachine: Using SSH client type: native
	I0226 03:31:45.216980   26685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x73cb920] 0x73ce680 <nil>  [] 0s} 127.0.0.1 61949 <nil> <nil>}
	I0226 03:31:45.217028   26685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:31:45.373715   26685 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:31:45.373863   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.424439   26685 main.go:141] libmachine: Using SSH client type: native
	I0226 03:31:45.424630   26685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x73cb920] 0x73ce680 <nil>  [] 0s} 127.0.0.1 61949 <nil> <nil>}
	I0226 03:31:45.424642   26685 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:31:45.572719   26685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:31:45.572737   26685 machine.go:91] provisioned docker machine in 4.447205221s
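The unit file written above uses the standard systemd idiom of an empty ExecStart= line followed by the full command, which replaces rather than appends to the command list inherited from the base docker.service; the comments embedded in the unit explain the "more than one ExecStart=" failure it avoids. Once the diff-and-move command installs it, the merged result could be sanity-checked with (hypothetical follow-up, not run by the provisioner):

	# print the unit systemd will actually execute; only one effective ExecStart should remain
	out/minikube-darwin-amd64 -p old-k8s-version-326000 ssh -- systemctl cat docker.service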
	I0226 03:31:45.572753   26685 start.go:300] post-start starting for "old-k8s-version-326000" (driver="docker")
	I0226 03:31:45.572764   26685 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:31:45.572834   26685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:31:45.572895   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.624334   26685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61949 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:31:45.727601   26685 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:31:45.731955   26685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:31:45.731978   26685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:31:45.731985   26685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:31:45.731991   26685 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:31:45.732001   26685 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:31:45.732095   26685 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:31:45.732281   26685 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:31:45.732483   26685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:31:45.747326   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:31:45.787171   26685 start.go:303] post-start completed in 214.403372ms
	I0226 03:31:45.787257   26685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:31:45.787319   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.837866   26685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61949 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:31:45.931917   26685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
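The two probes above read disk usage of /var: `df -h ... print $5` extracts the Use% column, and `df -BG ... print $4` the available gigabytes. A small sketch of the same field extraction, assuming GNU df as in the Ubuntu 22.04 guest (diskFields is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// diskFields returns the fields of the data row of `df -BG` for path:
	// device, size, used, available, use%, mountpoint.
	func diskFields(path string) ([]string, error) {
		out, err := exec.Command("df", "-BG", path).Output()
		if err != nil {
			return nil, err
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		return strings.Fields(lines[len(lines)-1]), nil
	}

	func main() {
		f, err := diskFields("/var")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("available:", f[3]) // the $4 the awk in the log prints
	}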
	I0226 03:31:45.937466   26685 fix.go:56] fixHost completed within 5.289608461s
	I0226 03:31:45.937486   26685 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 5.289653242s
	I0226 03:31:45.937565   26685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-326000
	I0226 03:31:45.987927   26685 ssh_runner.go:195] Run: cat /version.json
	I0226 03:31:45.987958   26685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:31:45.988005   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:45.988043   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:46.040404   26685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61949 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:31:46.040472   26685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61949 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/old-k8s-version-326000/id_rsa Username:docker}
	I0226 03:31:46.237961   26685 ssh_runner.go:195] Run: systemctl --version
	I0226 03:31:46.243470   26685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0226 03:31:46.248436   26685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0226 03:31:46.248497   26685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0226 03:31:46.263555   26685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0226 03:31:46.278635   26685 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0226 03:31:46.278653   26685 start.go:475] detecting cgroup driver to use...
	I0226 03:31:46.278665   26685 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:31:46.278786   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:31:46.306441   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0226 03:31:46.323232   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:31:46.339628   26685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:31:46.339697   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:31:46.358111   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:31:46.375565   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:31:46.397572   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:31:46.415254   26685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:31:46.431201   26685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:31:46.447409   26685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:31:46.462146   26685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:31:46.476865   26685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:31:46.534601   26685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 03:31:46.626253   26685 start.go:475] detecting cgroup driver to use...
	I0226 03:31:46.626275   26685 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:31:46.626339   26685 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:31:46.644602   26685 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:31:46.644671   26685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:31:46.665095   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:31:46.695405   26685 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:31:46.700035   26685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:31:46.715624   26685 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:31:46.745344   26685 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:31:46.811969   26685 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:31:46.899214   26685 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:31:46.899302   26685 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:31:46.928303   26685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:31:46.989856   26685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:31:47.240643   26685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:31:47.262714   26685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:31:47.328811   26685 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0226 03:31:47.328949   26685 cli_runner.go:164] Run: docker exec -t old-k8s-version-326000 dig +short host.docker.internal
	I0226 03:31:47.438613   26685 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:31:47.438711   26685 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:31:47.443156   26685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
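The hosts-file update above is an upsert: strip any existing host.minikube.internal line, append the fresh mapping, and copy the result back into place. A sketch of the same pattern in Go (upsertHost is illustrative; writing the real /etc/hosts needs root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line
	// maps host to ip, mirroring the grep -v / echo shell pattern in the log.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Illustrative copy rather than the real /etc/hosts.
		fmt.Println(upsertHost("hosts.copy", "192.168.65.254", "host.minikube.internal"))
	}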
	I0226 03:31:47.460673   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:47.511541   26685 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 03:31:47.511615   26685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:31:47.530593   26685 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:31:47.530606   26685 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:31:47.530674   26685 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:31:47.546062   26685 ssh_runner.go:195] Run: which lz4
	I0226 03:31:47.550280   26685 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0226 03:31:47.554214   26685 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 03:31:47.554253   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0226 03:31:53.673204   26685 docker.go:649] Took 6.122923 seconds to copy over tarball
	I0226 03:31:53.673277   26685 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 03:31:55.250430   26685 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.574419871s)
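From the preceding lines, 369789069 bytes copied in 6.122923 seconds works out to roughly 57.6 MiB/s over the SSH channel, with extraction taking another ~1.57s. The arithmetic:

	package main

	import "fmt"

	func main() {
		const bytes = 369789069.0 // preloaded-images tarball size from the log
		const secs = 6.122923     // copy time reported by docker.go:649
		fmt.Printf("%.1f MiB/s\n", bytes/secs/(1<<20)) // ~57.6
	}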
	I0226 03:31:55.250449   26685 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 03:31:55.300263   26685 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0226 03:31:55.315858   26685 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0226 03:31:55.345043   26685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:31:55.406815   26685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:31:56.008853   26685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:31:56.028136   26685 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0226 03:31:56.028150   26685 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0226 03:31:56.028156   26685 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 03:31:56.033453   26685 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:31:56.033493   26685 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:31:56.033541   26685 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:31:56.033594   26685 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:31:56.034137   26685 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:31:56.034151   26685 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0226 03:31:56.034222   26685 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:31:56.035372   26685 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0226 03:31:56.038397   26685 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:31:56.038475   26685 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:31:56.039538   26685 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:31:56.039603   26685 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:31:56.039971   26685 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0226 03:31:56.040222   26685 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:31:56.040349   26685 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0226 03:31:56.040341   26685 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:31:58.031308   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:31:58.050932   26685 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0226 03:31:58.050980   26685 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:31:58.051038   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0226 03:31:58.070663   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0226 03:31:58.078186   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:31:58.098984   26685 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0226 03:31:58.099010   26685 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:31:58.099069   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0226 03:31:58.117639   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0226 03:31:58.122004   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:31:58.122421   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0226 03:31:58.132263   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:31:58.141392   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0226 03:31:58.143097   26685 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0226 03:31:58.143116   26685 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0226 03:31:58.143132   26685 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:31:58.143136   26685 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0226 03:31:58.143213   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0226 03:31:58.143233   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0226 03:31:58.145459   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0226 03:31:58.153684   26685 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0226 03:31:58.153723   26685 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:31:58.153821   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0226 03:31:58.164833   26685 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0226 03:31:58.164880   26685 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0226 03:31:58.164994   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0226 03:31:58.166661   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0226 03:31:58.167830   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0226 03:31:58.172093   26685 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0226 03:31:58.172120   26685 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0226 03:31:58.172187   26685 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0226 03:31:58.179441   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0226 03:31:58.187346   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0226 03:31:58.193195   26685 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0226 03:31:58.891389   26685 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:31:58.913172   26685 cache_images.go:92] LoadImages completed in 2.879262294s
	W0226 03:31:58.913223   26685 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
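The block above is cache reconciliation: the preload ships k8s.gcr.io-tagged images, but this run expects registry.k8s.io tags, so each registry.k8s.io image fails the ID check ("needs transfer"), is removed, and minikube falls back to per-image cache files, which are absent here, hence the final warning. A sketch of the presence/ID test (needsTransfer is illustrative; assumes the docker CLI on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether tag is missing from the local daemon or
	// resolves to an ID other than wantID, the same test cache_images.go logs.
	func needsTransfer(tag, wantID string) bool {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
		if err != nil {
			return true // image absent entirely
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		// Expected digest taken from the log; docker reports IDs with a sha256: prefix.
		fmt.Println(needsTransfer("registry.k8s.io/pause:3.1",
			"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"))
	}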
	I0226 03:31:58.913308   26685 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:31:58.965428   26685 cni.go:84] Creating CNI manager for ""
	I0226 03:31:58.965446   26685 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 03:31:58.965463   26685 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 03:31:58.965476   26685 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-326000 NodeName:old-k8s-version-326000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 03:31:58.965570   26685 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-326000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-326000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
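The generated config pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12, which must not overlap (10.96.0.0/12 ends at 10.111.255.255). A quick check with the standard library:

	package main

	import (
		"fmt"
		"net"
	)

	// overlaps reports whether two CIDR blocks share any addresses; aligned
	// blocks are either disjoint or nested, so containment of either network
	// address is a sufficient test.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		_, pods, _ := net.ParseCIDR("10.244.0.0/16")    // podSubnet from the config above
		_, services, _ := net.ParseCIDR("10.96.0.0/12") // serviceSubnet
		fmt.Println("overlap:", overlaps(pods, services)) // false
	}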
	I0226 03:31:58.965626   26685 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-326000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 03:31:58.965701   26685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0226 03:31:58.980588   26685 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:31:58.980652   26685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:31:58.995450   26685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0226 03:31:59.024006   26685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 03:31:59.052249   26685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0226 03:31:59.081315   26685 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:31:59.085703   26685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:31:59.103365   26685 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000 for IP: 192.168.85.2
	I0226 03:31:59.103394   26685 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:31:59.103580   26685 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:31:59.103650   26685 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:31:59.103758   26685 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/client.key
	I0226 03:31:59.103836   26685 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key.43b9df8c
	I0226 03:31:59.103905   26685 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key
	I0226 03:31:59.104110   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:31:59.104152   26685 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:31:59.104161   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:31:59.104207   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:31:59.104249   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:31:59.104286   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:31:59.104379   26685 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:31:59.104905   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:31:59.145873   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 03:31:59.186127   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:31:59.226075   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/old-k8s-version-326000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 03:31:59.266179   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:31:59.307201   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:31:59.347590   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:31:59.388955   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:31:59.430826   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:31:59.472249   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:31:59.512832   26685 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:31:59.552571   26685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:31:59.581484   26685 ssh_runner.go:195] Run: openssl version
	I0226 03:31:59.587159   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:31:59.603621   26685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:31:59.607850   26685 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:31:59.607905   26685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:31:59.614686   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:31:59.629995   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:31:59.646121   26685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:31:59.651737   26685 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:31:59.651803   26685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:31:59.658228   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
	I0226 03:31:59.673239   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:31:59.688949   26685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:31:59.693141   26685 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:31:59.693190   26685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:31:59.700115   26685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
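Each certificate is made resolvable to OpenSSL by symlinking it under /etc/ssl/certs as <subject-hash>.0, with the hash coming from `openssl x509 -hash -noout`; that is what the openssl/ln pairs above maintain. A sketch of that pairing (linkBySubjectHash is illustrative; assumes the openssl binary and write access to the cert dir):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates the <subject-hash>.0 symlink that OpenSSL
	// uses for CA lookup, as the ln -fs commands in the log do.
	func linkBySubjectHash(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // like -f: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}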
	I0226 03:31:59.715045   26685 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:31:59.719233   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 03:31:59.725840   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 03:31:59.732589   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 03:31:59.761434   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 03:31:59.768226   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 03:31:59.775002   26685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
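Each `-checkend 86400` above asks whether the certificate expires within 86400 seconds (24 hours); a non-zero exit would mark it for regeneration. The same test without shelling out, via crypto/x509 (expiresWithin is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors `openssl x509 -checkend`: true when the cert's
	// NotAfter falls inside the next d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		ok, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
		fmt.Println(ok, err)
	}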
	I0226 03:31:59.781337   26685 kubeadm.go:404] StartCluster: {Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:31:59.781445   26685 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:31:59.801871   26685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:31:59.816960   26685 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 03:31:59.816980   26685 kubeadm.go:636] restartCluster start
	I0226 03:31:59.817031   26685 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 03:31:59.831670   26685 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:31:59.831747   26685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-326000
	I0226 03:31:59.883664   26685 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-326000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:31:59.883827   26685 kubeconfig.go:146] "old-k8s-version-326000" context is missing from /Users/jenkins/minikube-integration/18222-9538/kubeconfig - will repair!
	I0226 03:31:59.884130   26685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:31:59.885376   26685 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 03:31:59.900734   26685 api_server.go:166] Checking apiserver status ...
	I0226 03:31:59.900800   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:31:59.917705   26685 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... 19 near-identical "Checking apiserver status" probes elided: the same pgrep check repeats every ~500ms from 03:32:00 through 03:32:09, failing each time with the same empty stdout/stderr ...]
	I0226 03:32:09.913596   26685 api_server.go:166] Checking apiserver status ...
	I0226 03:32:09.913702   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:32:09.931937   26685 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:32:09.931953   26685 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
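The probe loop above is a fixed-interval poll under an overall deadline: retry every ~500ms until an apiserver pid appears or the context expires, which after about ten seconds of misses is where the "context deadline exceeded" comes from. A generic sketch of that shape (pollUntil is illustrative):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// pollUntil runs probe every interval until it succeeds or ctx expires,
	// the shape of the apiserver-status loop in the log.
	func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
		tick := time.NewTicker(interval)
		defer tick.Stop()
		for {
			if err := probe(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded"
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		err := pollUntil(ctx, 500*time.Millisecond, func() error { return errors.New("no apiserver pid yet") })
		fmt.Println(err)
	}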
	I0226 03:32:09.931962   26685 kubeadm.go:1135] stopping kube-system containers ...
	I0226 03:32:09.932032   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:32:09.949724   26685 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 03:32:09.967770   26685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:32:09.983062   26685 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 26 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 26 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 26 11:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 26 11:28 /etc/kubernetes/scheduler.conf
	
	I0226 03:32:09.983128   26685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 03:32:09.997884   26685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 03:32:10.012895   26685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 03:32:10.027894   26685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 03:32:10.043075   26685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:32:10.058632   26685 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 03:32:10.058653   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:32:10.130898   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:32:10.653816   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:32:10.850033   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:32:10.927833   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
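Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, so existing cluster state is reused. A sketch replaying the same sequence (paths are the ones from the log; the loop itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The phase order used in the log above.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubeadm", args...)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s", p, err, out)
				return
			}
		}
		fmt.Println("control plane reconfigured")
	}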
	I0226 03:32:11.028104   26685 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:32:11.028172   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 near-identical retries elided: sudo pgrep -xnf kube-apiserver.*minikube.* repeats every ~500ms from 03:32:11 through 03:33:10 without finding a process ...]
	I0226 03:33:10.541967   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
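The half-second pgrep probes above are the wait loop minikube runs while checking whether a kube-apiserver process has come up; roughly half a minute of probes pass here with no hit before the fallback diagnostics below begin. A minimal Go sketch of that poll-until-deadline pattern, assuming a hypothetical runSSH helper in place of minikube's real ssh_runner and an illustrative 30-second timeout (this is not the project's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner: it simply runs
// the command locally and reports whether it exited zero.
func runSSH(cmd string) bool {
	return exec.Command("/bin/bash", "-c", cmd).Run() == nil
}

// waitForAPIServer re-issues the pgrep probe every 500ms until the deadline
// passes, mirroring the repeated log lines above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*") {
			return nil // an apiserver process showed up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err) // in this report's run, the wait keeps timing out
	}
}

When the wait gives up, the runner drops into the container probes and log gathering that follow.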
	I0226 03:33:11.041795   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:11.062034   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.062047   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:11.062113   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:11.081416   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.081430   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:11.081489   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:11.102185   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.102204   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:11.102277   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:11.122402   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.122414   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:11.122473   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:11.143952   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.143967   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:11.144033   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:11.162404   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.162418   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:11.162483   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:11.195266   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.195305   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:11.195402   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:11.214268   26685 logs.go:276] 0 containers: []
	W0226 03:33:11.214282   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
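With the process wait exhausted, the runner probes each control-plane component with the same docker filter, expecting a container named k8s_<component>; every probe above returns zero IDs. A short Go sketch of that probe loop, run locally for illustration (it assumes a reachable docker CLI; the component names and filter syntax are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The components probed in each diagnostic cycle of this log.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers issues the same filter the log shows and returns the
// matching container IDs, one per output line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}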
	I0226 03:33:11.214296   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:11.214304   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:11.235345   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:11.235359   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:11.300033   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:11.300049   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:11.340236   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:11.340252   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:11.360214   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:11.360230   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:11.427926   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
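Each cycle then gathers the same five diagnostics; their order shifts from cycle to cycle in the log, which is harmless because the commands are independent. A Go sketch of that gathering pass, executing the commands locally rather than over SSH (paths and flags are copied from the log; the map and its iteration are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// The diagnostics each "Gathering logs" pass runs. Go's map iteration order
// is unspecified, which is fine here: the probes do not depend on each other.
var gatherers = map[string]string{
	"kubelet": "sudo journalctl -u kubelet -n 400",
	"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes": "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes " +
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	"Docker": "sudo journalctl -u docker -u cri-docker -n 400",
	// `which crictl || echo crictl` keeps the pipeline a valid command even
	// when crictl is absent, so the docker fallback after || still runs.
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gatherers {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// With no apiserver listening, "describe nodes" fails exactly as
			// recorded above: localhost:8443 refuses the connection and
			// kubectl exits with status 1.
			fmt.Printf("failed %s: %v\n", name, err)
			continue
		}
		fmt.Printf("gathered %s: %d bytes\n", name, len(out))
	}
}

The repeated "connection to the server localhost:8443 was refused" blocks in the cycles that follow are this same describe-nodes step failing on every pass, since no kube-apiserver container ever starts during this window.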
	I0226 03:33:13.928212   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:13.946472   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:13.964588   26685 logs.go:276] 0 containers: []
	W0226 03:33:13.964603   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:13.964677   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:13.983318   26685 logs.go:276] 0 containers: []
	W0226 03:33:13.983332   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:13.983406   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:14.001955   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.001976   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:14.002043   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:14.020551   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.020565   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:14.020637   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:14.039347   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.039362   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:14.039439   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:14.058860   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.058874   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:14.058954   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:14.077985   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.078001   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:14.078074   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:14.097378   26685 logs.go:276] 0 containers: []
	W0226 03:33:14.097394   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:14.097402   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:14.097409   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:14.142040   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:14.142060   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:14.162361   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:14.162378   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:14.230562   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:14.230577   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:14.230585   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:14.251360   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:14.251375   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:16.815813   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:16.833048   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:16.852258   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.852278   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:16.852395   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:16.871488   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.871502   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:16.871571   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:16.890579   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.890595   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:16.890662   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:16.910394   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.910409   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:16.910491   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:16.929379   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.929393   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:16.929457   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:16.947966   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.947980   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:16.948045   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:16.967194   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.967209   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:16.967268   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:16.986253   26685 logs.go:276] 0 containers: []
	W0226 03:33:16.986268   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:16.986275   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:16.986282   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:17.005564   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:17.005580   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:17.070113   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:17.070131   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:17.070143   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:17.091513   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:17.091529   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:17.213237   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:17.213252   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:19.755259   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:19.772358   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:19.799106   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.799119   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:19.799199   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:19.817414   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.817427   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:19.817504   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:19.836320   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.836335   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:19.836401   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:19.854509   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.854525   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:19.854589   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:19.873086   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.873107   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:19.873214   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:19.892575   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.892591   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:19.892664   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:19.911957   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.911971   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:19.912039   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:19.931393   26685 logs.go:276] 0 containers: []
	W0226 03:33:19.931408   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:19.931415   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:19.931423   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:19.972704   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:19.972718   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:19.992587   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:19.992603   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:20.060106   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:20.060124   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:20.060134   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:20.080686   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:20.080702   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:22.645232   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:22.662562   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:22.681744   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.681759   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:22.681844   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:22.700687   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.700700   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:22.700766   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:22.719458   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.719475   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:22.719549   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:22.737699   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.737714   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:22.737784   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:22.758028   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.758044   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:22.758119   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:22.776810   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.776825   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:22.776903   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:22.796031   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.796046   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:22.796116   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:22.813646   26685 logs.go:276] 0 containers: []
	W0226 03:33:22.813667   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:22.813674   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:22.813681   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:22.836237   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:22.836252   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:22.899495   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:22.899511   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:22.939554   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:22.939569   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:22.959989   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:22.960009   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:23.025552   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:25.526489   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:25.545671   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:25.563950   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.563970   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:25.564057   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:25.583324   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.583342   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:25.583414   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:25.602627   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.602642   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:25.602714   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:25.627509   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.627524   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:25.627605   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:25.645607   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.645625   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:25.645748   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:25.694561   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.694574   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:25.694634   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:25.713830   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.713845   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:25.713910   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:25.732104   26685 logs.go:276] 0 containers: []
	W0226 03:33:25.732120   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:25.732127   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:25.732134   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:25.751646   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:25.751661   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:25.819279   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:25.819294   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:25.819302   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:25.840213   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:25.840228   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:25.903832   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:25.903846   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:28.445566   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:28.463800   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:28.481496   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.481512   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:28.481587   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:28.500231   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.500279   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:28.500356   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:28.518338   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.518353   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:28.518426   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:28.537275   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.537291   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:28.537363   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:28.555846   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.555861   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:28.555931   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:28.574717   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.574731   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:28.574816   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:28.592995   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.593018   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:28.593102   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:28.612259   26685 logs.go:276] 0 containers: []
	W0226 03:33:28.612272   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:28.612286   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:28.612294   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:28.653182   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:28.653196   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:28.672850   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:28.672865   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:28.750695   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:28.750711   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:28.750720   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:28.771560   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:28.771579   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:31.335467   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:31.352783   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:31.375398   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.375418   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:31.375508   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:31.401319   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.401338   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:31.401424   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:31.422907   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.422923   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:31.423018   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:31.443382   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.443396   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:31.443463   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:31.461403   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.461424   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:31.461498   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:31.499386   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.499399   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:31.499464   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:31.516880   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.516895   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:31.516978   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:31.535157   26685 logs.go:276] 0 containers: []
	W0226 03:33:31.535171   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:31.535179   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:31.535186   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:31.576072   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:31.576086   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:31.596136   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:31.596160   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:31.665307   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:31.665322   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:31.665330   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:31.686691   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:31.686706   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:34.250242   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:34.267908   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:34.286312   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.286326   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:34.286409   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:34.304028   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.304047   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:34.304119   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:34.321873   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.321888   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:34.321976   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:34.340545   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.340561   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:34.340625   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:34.359864   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.359878   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:34.359961   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:34.380027   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.380088   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:34.380212   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:34.413586   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.413603   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:34.413679   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:34.433088   26685 logs.go:276] 0 containers: []
	W0226 03:33:34.433104   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:34.433130   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:34.433142   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:34.452786   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:34.452802   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:34.520973   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:34.520985   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:34.520993   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:34.541741   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:34.541756   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:34.604522   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:34.604537   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:37.146775   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:37.164280   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:37.184594   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.184609   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:37.184695   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:37.201736   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.201752   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:37.201825   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:37.218404   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.218443   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:37.218512   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:37.236115   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.236135   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:37.236254   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:37.253651   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.253665   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:37.253739   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:37.271872   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.271886   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:37.271961   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:37.290648   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.290662   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:37.290741   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:37.307409   26685 logs.go:276] 0 containers: []
	W0226 03:33:37.307423   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:37.307430   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:37.307438   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:37.351795   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:37.351823   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:37.377075   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:37.377101   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:37.448724   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:37.448737   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:37.448755   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:37.472381   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:37.472421   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:40.035928   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:40.053751   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:40.071590   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.071605   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:40.071673   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:40.090035   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.090053   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:40.090138   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:40.108579   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.108596   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:40.108668   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:40.126785   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.126800   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:40.126882   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:40.144063   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.144087   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:40.144181   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:40.163239   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.163259   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:40.163349   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:40.182851   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.182866   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:40.182943   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:40.200741   26685 logs.go:276] 0 containers: []
	W0226 03:33:40.200759   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:40.200767   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:40.200781   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:40.223257   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:40.223280   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:40.287273   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:40.287295   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:40.331450   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:40.331473   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:40.354003   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:40.354023   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:40.424357   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:42.924687   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:42.942453   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:42.959655   26685 logs.go:276] 0 containers: []
	W0226 03:33:42.959671   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:42.959749   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:42.978412   26685 logs.go:276] 0 containers: []
	W0226 03:33:42.978427   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:42.978502   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:42.995774   26685 logs.go:276] 0 containers: []
	W0226 03:33:42.995789   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:42.995857   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:43.013313   26685 logs.go:276] 0 containers: []
	W0226 03:33:43.013329   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:43.013399   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:43.029359   26685 logs.go:276] 0 containers: []
	W0226 03:33:43.029376   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:43.029442   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:43.049996   26685 logs.go:276] 0 containers: []
	W0226 03:33:43.050012   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:43.050088   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:43.067443   26685 logs.go:276] 0 containers: []
	W0226 03:33:43.067457   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:43.067524   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:43.084680   26685 logs.go:276] 0 containers: []
	W0226 03:33:43.084694   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:43.084701   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:43.084708   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:43.143401   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:43.143417   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:43.184412   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:43.184427   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:43.203974   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:43.203991   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:43.269125   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:43.269137   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:43.269145   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:45.790619   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:45.809273   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:45.827495   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.827510   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:45.827586   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:45.844752   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.844768   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:45.844837   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:45.864849   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.864866   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:45.864941   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:45.884677   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.884696   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:45.884786   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:45.905607   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.905624   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:45.905699   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:45.925909   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.925924   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:45.925989   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:45.944172   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.944186   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:45.944253   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:45.963811   26685 logs.go:276] 0 containers: []
	W0226 03:33:45.963826   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:45.963832   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:45.963839   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:45.988833   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:45.988850   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:46.053340   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:46.053356   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:46.097853   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:46.097872   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:46.120931   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:46.120953   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:46.193926   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:48.694166   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:48.710741   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:48.726358   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.726373   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:48.726438   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:48.742473   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.742487   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:48.742560   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:48.759915   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.759938   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:48.760052   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:48.778756   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.778771   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:48.778839   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:48.795170   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.795184   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:48.795255   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:48.811544   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.811558   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:48.811635   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:48.827497   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.827512   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:48.827576   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:48.843863   26685 logs.go:276] 0 containers: []
	W0226 03:33:48.843878   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:48.843885   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:48.843892   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:48.903478   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:48.903491   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:48.944675   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:48.944690   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:48.963987   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:48.964001   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:49.025102   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:49.025113   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:49.025125   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
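
Each retry above follows the same probe sequence: pgrep for a kube-apiserver process, then a docker ps -a name filter per expected control-plane component (k8s_kube-apiserver, k8s_etcd, and so on), each returning zero containers. A minimal Go sketch of that name-filter check, assuming local docker access rather than the SSH session ssh_runner uses, with hypothetical helper names (not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the check in the log: list all containers,
    // running or exited, whose name matches the k8s_<component> prefix.
    // Hypothetical helper; minikube runs this command remotely over SSH.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("check %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // Corresponds to the W-level "No container was found matching ..." lines.
                fmt.Printf("no container found matching %q\n", c)
            } else {
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }

An empty ID list for every component is what produces the repeated "0 containers" and "No container was found matching" pairs throughout this log.
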
	I0226 03:33:51.547348   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:51.566028   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:51.582985   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.582999   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:51.583067   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:51.600905   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.600924   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:51.600998   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:51.619830   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.619846   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:51.619926   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:51.637543   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.637562   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:51.637633   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:51.654341   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.654359   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:51.654430   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:51.670608   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.670627   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:51.670699   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:51.688436   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.688452   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:51.688519   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:51.704858   26685 logs.go:276] 0 containers: []
	W0226 03:33:51.704872   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:51.704880   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:51.704887   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:51.746576   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:51.746592   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:51.766912   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:51.766927   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:51.831786   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:51.831801   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:51.831809   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:51.853771   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:51.853787   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:54.418558   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:54.436756   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:54.454275   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.454289   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:54.454362   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:54.471179   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.471194   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:54.471263   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:54.488415   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.488429   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:54.488499   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:54.504907   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.504923   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:54.504995   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:54.522770   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.522785   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:54.522850   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:54.539598   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.539612   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:54.539688   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:54.555685   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.555699   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:54.555760   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:54.571754   26685 logs.go:276] 0 containers: []
	W0226 03:33:54.571770   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:54.571779   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:54.571794   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:54.613767   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:54.613785   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:54.635209   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:54.635229   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:54.703106   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:54.703117   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:54.703126   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:54.724175   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:54.724190   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:33:57.289698   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:33:57.307466   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:33:57.327062   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.327077   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:33:57.327149   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:33:57.344964   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.344980   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:33:57.345062   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:33:57.364859   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.364876   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:33:57.364956   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:33:57.383737   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.383753   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:33:57.383821   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:33:57.414764   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.414782   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:33:57.414864   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:33:57.507470   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.507484   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:33:57.507556   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:33:57.527469   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.527484   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:33:57.527555   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:33:57.546426   26685 logs.go:276] 0 containers: []
	W0226 03:33:57.546450   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:33:57.546463   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:33:57.546477   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:33:57.587533   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:33:57.587559   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:33:57.608408   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:33:57.608424   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:33:57.677188   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:33:57.677201   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:33:57.677209   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:33:57.698169   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:33:57.698183   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:00.258098   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:00.277353   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:00.295301   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.295315   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:00.295387   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:00.313601   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.313616   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:00.313681   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:00.331735   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.331749   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:00.331817   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:00.348733   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.348748   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:00.348817   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:00.366702   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.366716   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:00.366781   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:00.384300   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.384315   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:00.384384   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:00.403309   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.403324   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:00.403397   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:00.423605   26685 logs.go:276] 0 containers: []
	W0226 03:34:00.423620   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:00.423628   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:00.423635   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:00.444494   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:00.444509   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:00.506782   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:00.506797   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:00.547770   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:00.547786   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:00.567277   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:00.567292   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:00.639807   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
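
Every "describe nodes" attempt fails with the same refusal on localhost:8443, which is consistent with no kube-apiserver container existing: nothing is listening on the apiserver's secure port. A quick sketch to confirm the refusal independently of kubectl, assuming it runs inside the minikube node where 8443 is the apiserver port:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Probe the apiserver port directly. A "connection refused" here matches
    // the kubectl error repeated throughout this log.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8443 is accepting connections")
    }
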
	I0226 03:34:03.140034   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:03.163011   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:03.193314   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.193334   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:03.193434   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:03.235046   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.235066   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:03.235164   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:03.256833   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.256847   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:03.256914   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:03.276428   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.276443   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:03.276512   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:03.295926   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.295941   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:03.296019   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:03.313346   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.313361   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:03.313427   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:03.332750   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.332765   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:03.332837   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:03.351083   26685 logs.go:276] 0 containers: []
	W0226 03:34:03.351097   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:03.351104   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:03.351112   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:03.421386   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:03.421400   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:03.463960   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:03.463979   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:03.484917   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:03.484934   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:03.585666   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:03.585684   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:03.585699   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:06.112994   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:06.134005   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:06.152711   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.152725   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:06.152791   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:06.172812   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.172833   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:06.172962   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:06.195828   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.195844   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:06.195916   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:06.218844   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.218858   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:06.218923   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:06.238702   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.238744   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:06.238813   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:06.259749   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.259768   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:06.259848   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:06.280643   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.280673   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:06.280763   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:06.301204   26685 logs.go:276] 0 containers: []
	W0226 03:34:06.301223   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:06.301234   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:06.301247   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:06.345820   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:06.345837   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:06.369961   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:06.369985   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:06.458346   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:06.458358   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:06.458365   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:06.505567   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:06.505581   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
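
The container-status command above is a shell fallback chain: it uses crictl when `which crictl` finds it on PATH, and otherwise falls through to `docker ps -a`. Equivalent selection logic sketched in Go, as an illustration of the idiom rather than how minikube implements it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl if installed; otherwise fall back to docker,
        // matching the `which crictl || ... || docker ps -a` chain in the log.
        cli, args := "docker", []string{"ps", "-a"}
        if _, err := exec.LookPath("crictl"); err == nil {
            cli, args = "crictl", []string{"ps", "-a"}
        }
        out, err := exec.Command(cli, args...).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
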
	I0226 03:34:09.068712   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:09.087482   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:09.107717   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.107733   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:09.107809   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:09.126966   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.126985   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:09.127059   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:09.146913   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.146926   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:09.146995   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:09.167189   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.167209   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:09.167281   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:09.187778   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.187797   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:09.187886   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:09.208145   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.208162   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:09.208228   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:09.226664   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.226685   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:09.226757   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:09.247037   26685 logs.go:276] 0 containers: []
	W0226 03:34:09.247056   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:09.247065   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:09.247077   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:09.314264   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:09.314279   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:09.358455   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:09.358472   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:09.379455   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:09.379471   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:09.448001   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:09.448012   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:09.448027   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:11.970743   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:11.989088   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:12.009808   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.009825   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:12.009892   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:12.029077   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.029095   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:12.029168   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:12.048369   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.048383   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:12.048452   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:12.069306   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.069320   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:12.069391   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:12.090016   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.090034   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:12.090110   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:12.109585   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.109607   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:12.109720   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:12.132315   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.132328   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:12.132406   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:12.155341   26685 logs.go:276] 0 containers: []
	W0226 03:34:12.155355   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:12.155362   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:12.155370   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:12.229958   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:12.229986   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:12.251733   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:12.251753   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:12.320923   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:12.320936   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:12.320944   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:12.344039   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:12.344058   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:14.911130   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:14.928355   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:14.946654   26685 logs.go:276] 0 containers: []
	W0226 03:34:14.946669   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:14.946776   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:14.964836   26685 logs.go:276] 0 containers: []
	W0226 03:34:14.964851   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:14.964920   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:14.983136   26685 logs.go:276] 0 containers: []
	W0226 03:34:14.983161   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:14.983233   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:15.001352   26685 logs.go:276] 0 containers: []
	W0226 03:34:15.001366   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:15.001438   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:15.019173   26685 logs.go:276] 0 containers: []
	W0226 03:34:15.019188   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:15.019254   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:15.037386   26685 logs.go:276] 0 containers: []
	W0226 03:34:15.037401   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:15.037476   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:15.056840   26685 logs.go:276] 0 containers: []
	W0226 03:34:15.056858   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:15.056928   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:15.075479   26685 logs.go:276] 0 containers: []
	W0226 03:34:15.075494   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:15.075501   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:15.075508   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:15.097446   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:15.097460   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:15.160928   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:15.160941   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:15.203784   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:15.203800   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:15.223545   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:15.223562   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:15.293161   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:17.793412   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:17.812738   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:17.834956   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.834972   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:17.835046   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:17.854973   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.854988   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:17.855056   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:17.874358   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.874371   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:17.874446   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:17.896418   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.896432   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:17.896500   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:17.917927   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.917941   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:17.918005   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:17.937816   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.937832   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:17.937938   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:17.959061   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.959076   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:17.959142   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:17.978516   26685 logs.go:276] 0 containers: []
	W0226 03:34:17.978530   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:17.978537   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:17.978544   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:18.025239   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:18.025257   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:18.046852   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:18.046869   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:18.119049   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:18.119068   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:18.119079   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:18.142679   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:18.142706   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
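
The timestamps show the whole probe repeating on a roughly three-second cadence (03:34:14, :17, :20, ...). A conceptual Go sketch of such a deadline-bounded poll loop, reusing the pgrep pattern visible in the log; this approximates the observable behavior only and is not minikube's actual wait code:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls until a kube-apiserver process appears or the
    // deadline passes. Runs pgrep locally; minikube runs it via sudo over SSH.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits non-zero when no process matches the pattern.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        if err := waitForAPIServer(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
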
	I0226 03:34:20.709714   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:20.727526   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:20.745018   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.745033   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:20.745098   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:20.763807   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.763842   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:20.763931   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:20.783944   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.783971   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:20.784067   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:20.803187   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.803202   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:20.803277   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:20.821266   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.821281   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:20.821349   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:20.840026   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.840046   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:20.840123   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:20.859668   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.859684   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:20.859768   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:20.879573   26685 logs.go:276] 0 containers: []
	W0226 03:34:20.879588   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:20.879595   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:20.879603   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:20.900536   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:20.900560   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:20.964534   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:20.964545   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:20.964555   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:20.985454   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:20.985469   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:21.047500   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:21.047515   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:23.589848   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:23.607360   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:23.623913   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.623929   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:23.624004   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:23.642061   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.642074   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:23.642150   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:23.659303   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.659318   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:23.659385   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:23.677153   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.677168   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:23.677234   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:23.694707   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.694723   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:23.694795   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:23.712183   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.712198   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:23.712261   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:23.729911   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.729927   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:23.729992   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:23.749131   26685 logs.go:276] 0 containers: []
	W0226 03:34:23.749145   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:23.749154   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:23.749160   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:23.790580   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:23.790603   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:23.811780   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:23.811798   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:23.872429   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:23.872441   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:23.872449   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:23.893040   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:23.893055   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:26.454569   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:26.472112   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:26.490582   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.490603   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:26.490696   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:26.508322   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.508338   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:26.508407   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:26.525198   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.525212   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:26.525278   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:26.541584   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.541595   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:26.541659   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:26.558909   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.558934   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:26.559006   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:26.575931   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.575961   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:26.576036   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:26.593944   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.593957   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:26.594012   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:26.612213   26685 logs.go:276] 0 containers: []
	W0226 03:34:26.612231   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:26.612245   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:26.612256   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:26.676749   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:26.676763   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:26.676771   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:26.697389   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:26.697405   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:26.770390   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:26.770402   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:26.815434   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:26.815453   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:29.338773   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:29.356292   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:29.373566   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.373581   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:29.373657   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:29.391010   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.391024   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:29.391088   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:29.408431   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.408448   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:29.408513   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:29.425739   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.425755   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:29.425813   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:29.443710   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.443726   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:29.443788   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:29.461970   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.461985   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:29.462055   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:29.480206   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.480222   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:29.480294   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:29.498911   26685 logs.go:276] 0 containers: []
	W0226 03:34:29.498927   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:29.498937   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:29.498945   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:29.594491   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:29.594509   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:29.641884   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:29.641918   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:29.707893   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:29.707914   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:29.772507   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:29.786685   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:29.786698   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:32.309851   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:32.331504   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:32.350097   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.350118   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:32.350210   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:32.369738   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.369752   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:32.369825   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:32.400245   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.400273   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:32.400347   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:32.419710   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.419747   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:32.419893   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:32.441675   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.441690   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:32.441759   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:32.460553   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.460574   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:32.460659   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:32.478063   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.478079   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:32.478147   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:32.496017   26685 logs.go:276] 0 containers: []
	W0226 03:34:32.496035   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
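
All eight lookups come back empty because no control-plane container was ever created. The name filters work because the dockershim names its containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching on the k8s_ prefix plus a component name selects exactly that component's containers. A small illustrative helper (hypothetical names):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listComponent mirrors the lookups above: a Docker name filter on
    // "k8s_" plus the component matches only that component's containers.
    func listComponent(component string) (string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	return string(out), err
    }

    func main() {
    	ids, _ := listComponent("kube-apiserver")
    	fmt.Printf("apiserver containers: %q\n", ids) // empty here, matching the "0 containers" lines
    }
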
	I0226 03:34:32.496045   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:32.496054   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:32.517954   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:32.517970   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:32.587915   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:32.587931   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:32.633289   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:32.633309   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:32.660734   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:32.660750   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:32.723135   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
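
The recurring stderr block is the decisive symptom: kubectl, pointed at the node's own kubeconfig, is refused on localhost:8443, the apiserver port on a minikube node, which is consistent with the zero k8s_kube-apiserver containers above. A direct TCP probe (a hypothetical check, run inside the node) would show the same refusal:

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	// Nothing is listening on the apiserver's port, so this fails with
    	// "connection refused", the same error kubectl reports above.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
    	if err != nil {
    		log.Fatalf("apiserver not reachable: %v", err)
    	}
    	conn.Close()
    }
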
	I0226 03:34:35.223354   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:35.241773   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:35.258515   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.258530   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:35.258599   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:35.275046   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.275065   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:35.275121   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:35.299355   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.299389   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:35.299588   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:35.321317   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.321341   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:35.321429   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:35.340314   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.340335   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:35.340417   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:35.358196   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.358210   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:35.358274   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:35.376047   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.376061   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:35.376118   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:35.394283   26685 logs.go:276] 0 containers: []
	W0226 03:34:35.394298   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:35.394305   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:35.394312   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:35.457461   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:35.457480   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:35.505609   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:35.505631   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:35.527182   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:35.527203   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:35.593690   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:35.593701   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:35.593709   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:38.130327   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:38.147560   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:38.165642   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.165657   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:38.165728   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:38.182368   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.182381   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:38.182447   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:38.199732   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.199746   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:38.199820   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:38.217727   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.217740   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:38.217811   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:38.234058   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.234080   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:38.234179   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:38.252178   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.252194   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:38.252268   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:38.268650   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.268664   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:38.268735   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:38.286397   26685 logs.go:276] 0 containers: []
	W0226 03:34:38.286412   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:38.286421   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:38.286430   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:38.306168   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:38.306184   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:38.370775   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:38.370786   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:38.370794   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:38.391852   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:38.391868   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:38.454343   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:38.454356   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:40.998022   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:41.014436   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:41.032892   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.032907   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:41.032972   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:41.050172   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.050189   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:41.050267   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:41.068046   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.068064   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:41.068134   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:41.085989   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.085999   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:41.086072   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:41.102842   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.102860   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:41.102936   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:41.119708   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.119723   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:41.119794   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:41.139299   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.139315   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:41.139384   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:41.157261   26685 logs.go:276] 0 containers: []
	W0226 03:34:41.157279   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:41.157286   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:41.157294   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:41.203649   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:41.203666   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:41.224910   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:41.224925   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:41.291193   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:41.291203   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:41.291215   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:41.313091   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:41.313106   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:43.876288   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:43.892880   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:43.910148   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.910163   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:43.910241   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:43.926549   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.926564   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:43.926632   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:43.943326   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.943337   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:43.943404   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:43.960041   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.960056   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:43.960133   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:43.976642   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.976656   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:43.976724   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:43.993786   26685 logs.go:276] 0 containers: []
	W0226 03:34:43.993800   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:43.993867   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:44.011012   26685 logs.go:276] 0 containers: []
	W0226 03:34:44.011027   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:44.011112   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:44.029779   26685 logs.go:276] 0 containers: []
	W0226 03:34:44.029794   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:44.029800   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:44.029807   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:44.095145   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:44.095156   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:44.095164   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:44.116218   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:44.116233   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:44.176595   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:44.176613   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:44.218755   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:44.218771   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
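
For reference, the three host-log commands the gather phase keeps repeating, with the flags spelled out as I read journalctl(1) and util-linux dmesg(1): journalctl takes the last 400 lines per unit, and dmesg is filtered to warn-and-worse severities with the pager and color disabled. A sketch that runs them locally (hypothetical wrapper):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	gathers := [][2]string{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},             // last 400 unit lines
    		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"}, // engine + CRI shim
    		// -P no pager, -H human-readable, -L=never no color, warn-and-worse only
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	}
    	for _, g := range gathers {
    		out, _ := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
    		fmt.Printf("== %s ==\n%s", g[0], out)
    	}
    }
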
	I0226 03:34:46.740207   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:46.759749   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:46.779611   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.779626   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:46.779696   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:46.800803   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.800817   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:46.800886   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:46.819340   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.819354   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:46.819420   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:46.836550   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.836564   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:46.836642   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:46.855383   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.855398   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:46.855465   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:46.875018   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.875035   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:46.875131   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:46.896810   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.896825   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:46.896891   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:46.915252   26685 logs.go:276] 0 containers: []
	W0226 03:34:46.915266   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:46.915273   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:46.915280   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:46.958115   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:46.958138   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:46.984574   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:46.984591   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:47.053651   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:47.053665   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:47.053678   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:47.076729   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:47.076748   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:49.643832   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:49.662456   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:49.679387   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.679402   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:49.679464   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:49.696833   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.696848   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:49.696919   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:49.715898   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.715914   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:49.715982   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:49.733649   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.733663   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:49.733735   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:49.750727   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.750742   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:49.750813   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:49.769388   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.784170   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:49.784248   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:49.803831   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.803845   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:49.803915   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:49.822395   26685 logs.go:276] 0 containers: []
	W0226 03:34:49.822410   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:49.822418   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:49.822425   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:49.865341   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:49.865360   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:49.886080   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:49.886137   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:49.960987   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:49.961000   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:49.961007   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:49.981690   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:49.981705   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:52.544558   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:52.563862   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:52.581436   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.581451   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:52.581518   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:52.598236   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.598251   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:52.598319   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:52.617461   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.617476   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:52.617544   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:52.634931   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.634947   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:52.635010   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:52.653278   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.653292   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:52.653358   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:52.672427   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.672441   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:52.672507   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:52.690001   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.690016   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:52.690081   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:52.709119   26685 logs.go:276] 0 containers: []
	W0226 03:34:52.709136   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:52.709145   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:52.709154   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:52.751402   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:52.751419   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:52.772036   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:52.772052   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:52.852539   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:52.852561   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:52.852581   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:52.905856   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:52.905872   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:55.470464   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:55.487364   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:55.504455   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.504470   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:55.504536   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:55.523070   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.523084   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:55.523147   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:55.541959   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.541973   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:55.542043   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:55.559140   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.559158   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:55.559240   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:55.577995   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.578008   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:55.578079   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:55.596458   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.596473   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:55.596536   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:55.615442   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.615456   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:55.615537   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:55.632158   26685 logs.go:276] 0 containers: []
	W0226 03:34:55.632182   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:55.632193   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:55.632201   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:55.673030   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:55.673046   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:34:55.692662   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:55.692689   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:55.757190   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:55.757205   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:55.757215   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:55.778538   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:55.778552   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:58.341316   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:34:58.361108   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:34:58.380718   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.380733   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:34:58.380788   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:34:58.400562   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.400578   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:34:58.400652   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:34:58.419938   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.419953   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:34:58.420018   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:34:58.437289   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.437308   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:34:58.437377   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:34:58.456094   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.456109   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:34:58.456175   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:34:58.476218   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.476243   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:34:58.476336   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:34:58.494514   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.494529   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:34:58.494597   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:34:58.512081   26685 logs.go:276] 0 containers: []
	W0226 03:34:58.512094   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:34:58.512101   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:34:58.512108   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:34:58.575519   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:34:58.575532   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:34:58.575542   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:34:58.596603   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:34:58.596617   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:34:58.658552   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:34:58.658567   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:34:58.700436   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:34:58.700456   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:01.222300   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:01.241020   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:01.259925   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.259940   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:01.260012   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:01.278333   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.278364   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:01.278443   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:01.296747   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.296762   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:01.296829   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:01.316803   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.316818   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:01.316889   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:01.335630   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.335646   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:01.335715   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:01.356229   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.356254   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:01.356353   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:01.377791   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.377807   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:01.377877   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:01.408838   26685 logs.go:276] 0 containers: []
	W0226 03:35:01.408854   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:01.408875   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:01.408884   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:01.433201   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:01.433219   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:01.546869   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:01.546882   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:01.587370   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:01.587385   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:01.608148   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:01.608164   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:01.675533   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:04.176436   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:04.194975   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:04.213592   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.213607   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:04.213676   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:04.231675   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.231691   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:04.231760   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:04.249641   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.249656   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:04.249717   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:04.268974   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.268989   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:04.269055   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:04.288176   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.288191   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:04.288261   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:04.307382   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.307396   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:04.307463   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:04.329196   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.329213   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:04.329294   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:04.353769   26685 logs.go:276] 0 containers: []
	W0226 03:35:04.353792   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:04.353808   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:04.353823   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:04.377570   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:04.377587   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:04.459174   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:04.459190   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:04.459200   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:04.486407   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:04.486430   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:04.561757   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:04.561780   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:07.113748   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:07.131939   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:07.151619   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.151635   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:07.151703   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:07.171437   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.171456   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:07.171529   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:07.190799   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.190814   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:07.190886   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:07.209820   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.209834   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:07.209895   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:07.228707   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.228722   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:07.228790   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:07.246511   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.246526   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:07.246593   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:07.266869   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.266882   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:07.266953   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:07.286834   26685 logs.go:276] 0 containers: []
	W0226 03:35:07.286849   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:07.286856   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:07.286870   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:07.328302   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:07.328317   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:07.349122   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:07.349137   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:07.414895   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:07.414906   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:07.414914   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:07.435784   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:07.435799   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:10.000114   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:10.018997   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:10.037605   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.037619   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:10.037687   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:10.055886   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.055899   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:10.055967   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:10.074659   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.074674   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:10.074745   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:10.093157   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.093173   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:10.093242   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:10.112196   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.112212   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:10.112282   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:10.131413   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.131428   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:10.131506   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:10.151821   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.151836   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:10.151909   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:10.171744   26685 logs.go:276] 0 containers: []
	W0226 03:35:10.171759   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:10.171766   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:10.171773   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:10.213893   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:10.213907   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:10.277212   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:10.277228   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:10.319027   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:10.319043   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:10.338541   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:10.338556   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:10.405103   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
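
By this point the loop has been polling since 03:34:29, with every cycle finding zero containers and hitting the same connection refusal, so the control plane never started during the window. One way to quantify the retries from a saved copy of this output (the filename start.log is assumed):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Each cycle opens with the same pgrep line; counting it gives the retry count.
    	data, err := os.ReadFile("start.log") // assumed local copy of this log
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`Run: sudo pgrep -xnf kube-apiserver`)
    	fmt.Println("poll cycles:", len(re.FindAll(data, -1)))
    }
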
	I0226 03:35:12.906901   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:12.924218   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:12.943227   26685 logs.go:276] 0 containers: []
	W0226 03:35:12.943241   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:12.943302   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:12.962047   26685 logs.go:276] 0 containers: []
	W0226 03:35:12.962062   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:12.962129   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:12.980448   26685 logs.go:276] 0 containers: []
	W0226 03:35:12.980471   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:12.980562   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:13.000900   26685 logs.go:276] 0 containers: []
	W0226 03:35:13.000917   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:13.000983   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:13.019242   26685 logs.go:276] 0 containers: []
	W0226 03:35:13.019255   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:13.019320   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:13.039160   26685 logs.go:276] 0 containers: []
	W0226 03:35:13.039174   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:13.039240   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:13.057523   26685 logs.go:276] 0 containers: []
	W0226 03:35:13.057540   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:13.057628   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:13.076555   26685 logs.go:276] 0 containers: []
	W0226 03:35:13.076579   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:13.076588   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:13.076597   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:13.119167   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:13.119182   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:13.140024   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:13.140041   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:13.205945   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:13.205957   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:13.205965   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:13.227085   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:13.227102   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:15.788819   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:15.806016   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:15.825328   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.825342   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:15.825409   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:15.844156   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.844172   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:15.844247   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:15.865319   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.865336   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:15.865414   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:15.896670   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.896683   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:15.896753   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:15.915948   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.915965   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:15.916036   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:15.937031   26685 logs.go:276] 0 containers: []
	W0226 03:35:15.937046   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:15.937119   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:16.001624   26685 logs.go:276] 0 containers: []
	W0226 03:35:16.001640   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:16.001720   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:16.020258   26685 logs.go:276] 0 containers: []
	W0226 03:35:16.020274   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:16.020282   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:16.020289   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:16.060786   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:16.060802   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:16.081223   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:16.081241   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:16.160537   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:16.160549   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:16.160561   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:16.181645   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:16.181659   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:18.746075   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:18.764382   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:18.782627   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.782641   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:18.782708   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:18.803590   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.803605   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:18.803671   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:18.821826   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.821840   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:18.821907   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:18.839511   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.839526   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:18.839594   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:18.857774   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.857789   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:18.857860   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:18.874780   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.874793   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:18.874859   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:18.893296   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.893310   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:18.893378   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:18.912009   26685 logs.go:276] 0 containers: []
	W0226 03:35:18.912023   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:18.912031   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:18.912037   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:18.978353   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:18.978366   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:18.978374   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:18.999779   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:18.999796   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:19.062997   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:19.063013   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:19.105014   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:19.105029   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:21.624738   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:21.643469   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:21.714546   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.714567   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:21.714637   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:21.732512   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.732550   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:21.732630   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:21.751331   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.751347   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:21.751419   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:21.771210   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.771229   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:21.771298   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:21.790061   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.790075   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:21.790146   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:21.808937   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.808952   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:21.809019   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:21.827576   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.827590   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:21.827656   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:21.847765   26685 logs.go:276] 0 containers: []
	W0226 03:35:21.847781   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:21.847788   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:21.847795   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:21.888283   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:21.888299   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:21.909410   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:21.909426   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:21.980572   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:21.980586   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:21.980594   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:22.001372   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:22.001386   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:24.566065   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:24.584184   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:24.603311   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.603326   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:24.603394   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:24.622050   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.622065   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:24.622134   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:24.639518   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.639531   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:24.639599   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:24.660439   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.660455   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:24.660522   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:24.678808   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.678822   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:24.678893   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:24.696284   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.696298   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:24.696370   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:24.714982   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.714996   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:24.715061   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:24.733810   26685 logs.go:276] 0 containers: []
	W0226 03:35:24.733824   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:24.733832   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:24.733838   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:24.776047   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:24.783379   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:24.804729   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:24.804746   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:24.873865   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:24.873877   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:24.873886   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:24.894624   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:24.894641   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:27.459602   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:27.477006   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:27.495546   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.495562   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:27.495629   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:27.513326   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.513340   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:27.513405   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:27.530874   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.530889   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:27.530967   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:27.548659   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.548677   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:27.548752   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:27.566900   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.566913   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:27.566982   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:27.586397   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.586411   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:27.586478   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:27.606217   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.606232   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:27.606303   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:27.626357   26685 logs.go:276] 0 containers: []
	W0226 03:35:27.626373   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:27.626380   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:27.626387   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:27.646725   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:27.646741   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:27.723007   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:27.723026   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:27.723033   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:27.744921   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:27.744935   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:27.808454   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:27.808470   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:30.349207   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:30.367386   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:30.385858   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.385873   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:30.385941   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:30.404411   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.404424   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:30.404489   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:30.423202   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.423219   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:30.423290   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:30.443265   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.443280   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:30.443348   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:30.462540   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.462555   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:30.462630   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:30.482012   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.482026   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:30.482091   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:30.500958   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.500974   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:30.501043   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:30.518732   26685 logs.go:276] 0 containers: []
	W0226 03:35:30.518747   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:30.518754   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:30.518760   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:30.540120   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:30.540135   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:30.605064   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:30.605077   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:30.605085   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:30.629029   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:30.629044   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:30.734163   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:30.734180   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:33.275261   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:33.292186   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:33.309491   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.309506   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:33.309574   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:33.328641   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.328660   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:33.328737   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:33.346034   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.346047   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:33.346115   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:33.364478   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.364493   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:33.364564   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:33.382472   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.382486   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:33.382547   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:33.401053   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.401067   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:33.401132   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:33.419135   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.419150   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:33.419213   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:33.437572   26685 logs.go:276] 0 containers: []
	W0226 03:35:33.437587   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:33.437594   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:33.437600   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:33.478393   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:33.478408   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:33.499315   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:33.499331   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:33.564327   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:33.564339   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:33.564346   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:33.585728   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:33.585743   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:36.153942   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:36.170493   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:36.189618   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.189632   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:36.189699   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:36.208611   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.208626   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:36.208693   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:36.227034   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.227050   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:36.227119   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:36.245534   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.245548   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:36.245612   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:36.264640   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.264655   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:36.264726   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:36.283148   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.283163   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:36.283228   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:36.300843   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.300858   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:36.300928   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:36.320000   26685 logs.go:276] 0 containers: []
	W0226 03:35:36.320015   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:36.320022   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:36.320029   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:36.386390   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:36.386401   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:36.386409   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:36.407297   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:36.407311   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:36.470653   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:36.470667   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:36.511704   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:36.511720   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:39.033057   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:39.052825   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:39.071267   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.071281   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:39.071344   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:39.090960   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.090972   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:39.091035   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:39.108523   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.108537   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:39.108604   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:39.127206   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.127221   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:39.127289   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:39.145811   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.145827   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:39.145902   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:39.164125   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.164140   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:39.164221   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:39.182189   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.182204   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:39.182268   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:39.202142   26685 logs.go:276] 0 containers: []
	W0226 03:35:39.202157   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:39.202164   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:39.202173   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:39.242689   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:39.242703   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:39.263800   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:39.263818   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:39.332107   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:39.332118   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:39.332126   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:39.353926   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:39.353942   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:41.920968   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:41.938343   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:41.956284   26685 logs.go:276] 0 containers: []
	W0226 03:35:41.956299   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:41.956365   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:41.975169   26685 logs.go:276] 0 containers: []
	W0226 03:35:41.975186   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:41.975253   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:41.994017   26685 logs.go:276] 0 containers: []
	W0226 03:35:41.994031   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:41.994097   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:42.012080   26685 logs.go:276] 0 containers: []
	W0226 03:35:42.012095   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:42.012162   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:42.031084   26685 logs.go:276] 0 containers: []
	W0226 03:35:42.031099   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:42.031174   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:42.049391   26685 logs.go:276] 0 containers: []
	W0226 03:35:42.049405   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:42.049472   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:42.067293   26685 logs.go:276] 0 containers: []
	W0226 03:35:42.067309   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:42.067375   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:42.085345   26685 logs.go:276] 0 containers: []
	W0226 03:35:42.085357   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:42.085364   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:42.085370   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:42.126620   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:42.126635   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:42.145916   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:42.145936   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:42.211108   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:42.211120   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:42.211129   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:42.231656   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:42.231670   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:44.794934   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:44.811462   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:44.829684   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.829699   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:44.829768   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:44.848423   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.848439   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:44.848506   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:44.866060   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.866073   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:44.866152   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:44.884595   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.884609   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:44.884676   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:44.906166   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.906182   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:44.906251   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:44.925055   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.925070   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:44.925138   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:44.943590   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.943606   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:44.943674   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:44.962902   26685 logs.go:276] 0 containers: []
	W0226 03:35:44.962923   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:44.962934   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:44.962944   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:45.005458   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:45.005473   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:45.025309   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:45.025327   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:45.103384   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:45.103397   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:45.103405   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:45.127237   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:45.127253   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:47.707573   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:47.725322   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:47.744348   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.744364   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:47.744437   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:47.763609   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.763623   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:47.763686   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:47.782212   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.782226   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:47.782298   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:47.801067   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.801080   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:47.801140   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:47.819583   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.819599   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:47.819665   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:47.838050   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.838065   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:47.838137   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:47.857005   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.857018   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:47.857081   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:47.874711   26685 logs.go:276] 0 containers: []
	W0226 03:35:47.874725   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:47.874734   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:47.874743   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:47.939322   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:47.939333   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:47.939340   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:47.960556   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:47.960570   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:48.023082   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:48.023097   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:48.063186   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:48.063202   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:50.583724   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:50.601444   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:50.619710   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.619726   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:50.619793   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:50.637915   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.637932   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:50.638003   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:50.657126   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.657140   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:50.657207   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:50.675206   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.675227   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:50.675312   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:50.693936   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.693958   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:50.694036   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:50.712403   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.712418   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:50.712489   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:50.732728   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.732743   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:50.732807   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:50.750435   26685 logs.go:276] 0 containers: []
	W0226 03:35:50.750450   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:50.750458   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:50.750464   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:50.791184   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:50.791201   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:50.810986   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:50.811016   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:50.902593   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:50.902605   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:50.902615   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:50.925581   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:50.925597   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:53.489492   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:53.507503   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:53.525960   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.525980   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:53.526079   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:53.545909   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.545927   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:53.546016   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:53.564462   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.564476   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:53.564545   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:53.582540   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.582556   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:53.582624   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:53.600363   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.600376   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:53.600455   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:53.618326   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.618340   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:53.618409   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:53.636289   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.636304   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:53.636374   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:53.654276   26685 logs.go:276] 0 containers: []
	W0226 03:35:53.654291   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:53.654299   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:53.654306   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:53.696935   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:53.696951   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:53.716607   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:53.716623   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:53.779920   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:53.779929   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:53.779937   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:53.800760   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:53.800775   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:56.364063   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:56.381735   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:56.401530   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.401545   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:56.401608   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:56.419095   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.419110   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:56.419178   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:56.438768   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.438781   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:56.438851   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:56.458234   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.458253   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:56.458334   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:56.477570   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.477584   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:56.477649   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:56.496556   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.496572   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:56.496636   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:56.514630   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.514644   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:56.514714   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:56.532615   26685 logs.go:276] 0 containers: []
	W0226 03:35:56.532630   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:56.532637   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:56.532644   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:56.574349   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:56.574366   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:56.593676   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:56.593692   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:56.670540   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:56.670552   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:56.670559   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:35:56.692537   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:56.692556   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
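Note: the block above is one pass of minikube's diagnostic loop. Every few seconds it probes for an apiserver process, filters Docker's full container list for each control-plane component by the k8s_<name> prefix, and re-collects kubelet/dmesg/Docker logs; "describe nodes" keeps failing because nothing is listening on localhost:8443. A hedged, hand-runnable approximation of the container checks (run inside the node, e.g. over minikube ssh; the component list is copied from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      # same filter minikube uses: container names prefixed k8s_<component>
      ids=$(sudo docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-none}"
    done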
	I0226 03:35:59.254171   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:35:59.270566   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:35:59.291079   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.291098   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:35:59.291172   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:35:59.309242   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.309267   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:35:59.309376   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:35:59.327174   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.327188   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:35:59.327247   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:35:59.345820   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.345836   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:35:59.345901   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:35:59.363421   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.363435   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:35:59.363500   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:35:59.381697   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.381713   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:35:59.381784   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:35:59.400295   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.400311   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:35:59.400378   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:35:59.418862   26685 logs.go:276] 0 containers: []
	W0226 03:35:59.418877   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:35:59.418886   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:35:59.418893   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:35:59.483710   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:35:59.483726   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:35:59.525119   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:35:59.525133   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:35:59.544448   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:35:59.544463   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:35:59.611289   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:35:59.611301   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:35:59.611310   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:02.135244   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:02.159316   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:36:02.180300   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.180316   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:36:02.180389   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:36:02.197364   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.197378   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:36:02.197444   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:36:02.214200   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.214216   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:36:02.214285   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:36:02.230712   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.230728   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:36:02.230802   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:36:02.251035   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.251114   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:36:02.251270   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:36:02.279984   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.280004   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:36:02.280078   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:36:02.297393   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.297407   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:36:02.297471   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:36:02.315045   26685 logs.go:276] 0 containers: []
	W0226 03:36:02.315063   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:36:02.315071   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:36:02.315078   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:36:02.334682   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:36:02.334698   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:36:02.409005   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:36:02.409020   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:36:02.409028   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:02.429624   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:36:02.429640   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:36:02.489416   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:36:02.489430   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:36:05.030604   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:05.047165   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:36:05.064634   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.064650   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:36:05.064716   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:36:05.081572   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.081586   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:36:05.081652   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:36:05.099205   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.099220   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:36:05.099289   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:36:05.115579   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.115594   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:36:05.115661   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:36:05.132389   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.132406   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:36:05.132475   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:36:05.149076   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.149098   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:36:05.149170   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:36:05.165039   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.165054   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:36:05.165128   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:36:05.183799   26685 logs.go:276] 0 containers: []
	W0226 03:36:05.183814   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:36:05.183822   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:36:05.183830   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:36:05.224009   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:36:05.224024   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:36:05.243455   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:36:05.243472   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:36:05.303019   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:36:05.303030   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:36:05.303039   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:05.323987   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:36:05.324001   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:36:07.888456   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:07.904944   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:36:07.921584   26685 logs.go:276] 0 containers: []
	W0226 03:36:07.921597   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:36:07.921682   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:36:07.938199   26685 logs.go:276] 0 containers: []
	W0226 03:36:07.938215   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:36:07.938282   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:36:07.955235   26685 logs.go:276] 0 containers: []
	W0226 03:36:07.955249   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:36:07.955316   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:36:07.971356   26685 logs.go:276] 0 containers: []
	W0226 03:36:07.971370   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:36:07.971439   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:36:07.988949   26685 logs.go:276] 0 containers: []
	W0226 03:36:07.988963   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:36:07.989026   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:36:08.006614   26685 logs.go:276] 0 containers: []
	W0226 03:36:08.006628   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:36:08.006696   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:36:08.023293   26685 logs.go:276] 0 containers: []
	W0226 03:36:08.023306   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:36:08.023372   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:36:08.039243   26685 logs.go:276] 0 containers: []
	W0226 03:36:08.039259   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:36:08.039267   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:36:08.039274   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:36:08.079674   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:36:08.079689   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:36:08.098808   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:36:08.098824   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:36:08.159292   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:36:08.159304   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:36:08.159312   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:08.179864   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:36:08.179879   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:36:10.741749   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:10.760191   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:36:10.777109   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.777126   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:36:10.777202   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:36:10.793727   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.793742   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:36:10.793812   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:36:10.811359   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.811375   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:36:10.811445   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:36:10.827423   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.827445   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:36:10.827520   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:36:10.843460   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.843474   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:36:10.843538   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:36:10.860398   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.860412   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:36:10.860485   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:36:10.877202   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.877217   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:36:10.877283   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:36:10.894157   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.894176   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:36:10.894186   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:36:10.894195   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:36:10.912990   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:36:10.913006   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:36:11.016933   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:36:11.016945   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:36:11.016953   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:11.038215   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:36:11.038229   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:36:11.103125   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:36:11.103148   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:36:13.647766   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:13.664518   26685 kubeadm.go:640] restartCluster took 4m13.817771718s
	W0226 03:36:13.664573   26685 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
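The restart path gives up here: four minutes of pgrep probes never saw a kube-apiserver process, so minikube falls through to a full "kubeadm reset" and re-init below. A minimal sketch of that readiness probe, assuming it runs inside the node (the pgrep pattern is the one from the log; the 120-try bound is illustrative, not minikube's actual timeout handling):

    tries=0
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      tries=$((tries + 1))
      [ "$tries" -ge 120 ] && { echo 'apiserver process never appeared' >&2; break; }
      sleep 2
    done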
	I0226 03:36:13.664599   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 03:36:14.090974   26685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:36:14.109305   26685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:36:14.124871   26685 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:36:14.124930   26685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:36:14.169080   26685 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
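Status 2 from "ls" is the expected signal here: none of the four kubeconfig files exist after the reset, so there is no stale config to clean up and minikube proceeds directly to "kubeadm init". The check reduces to something like this hedged one-liner (paths copied from the log):

    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf \
      && echo 'stale configs present, cleanup needed' \
      || echo 'fresh init path'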
	I0226 03:36:14.169109   26685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:36:14.230640   26685 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:36:14.230862   26685 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:36:14.536024   26685 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:36:14.536125   26685 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:36:14.536219   26685 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:36:14.699643   26685 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:36:14.700324   26685 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:36:14.706877   26685 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:36:14.777843   26685 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:36:14.808241   26685 out.go:204]   - Generating certificates and keys ...
	I0226 03:36:14.808309   26685 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:36:14.808360   26685 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:36:14.808423   26685 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:36:14.808477   26685 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:36:14.808531   26685 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:36:14.808584   26685 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:36:14.808653   26685 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:36:14.808701   26685 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:36:14.808758   26685 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:36:14.808825   26685 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:36:14.808860   26685 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:36:14.808909   26685 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:36:14.840484   26685 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:36:14.951739   26685 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:36:15.003661   26685 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:36:15.283964   26685 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:36:15.284488   26685 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:36:15.327930   26685 out.go:204]   - Booting up control plane ...
	I0226 03:36:15.328100   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:36:15.328226   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:36:15.328344   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:36:15.328500   26685 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:36:15.328754   26685 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:36:55.294046   26685 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:36:55.296699   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:36:55.296880   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:00.299625   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:00.299851   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:10.301527   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:10.301755   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:30.303743   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:30.303901   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:10.306593   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:10.306876   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:10.306889   26685 kubeadm.go:322] 
	I0226 03:38:10.306934   26685 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:38:10.306986   26685 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:38:10.306995   26685 kubeadm.go:322] 
	I0226 03:38:10.307031   26685 kubeadm.go:322] This error is likely caused by:
	I0226 03:38:10.307064   26685 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:38:10.307169   26685 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:38:10.307177   26685 kubeadm.go:322] 
	I0226 03:38:10.307299   26685 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:38:10.307340   26685 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:38:10.307379   26685 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:38:10.307385   26685 kubeadm.go:322] 
	I0226 03:38:10.307500   26685 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:38:10.307603   26685 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:38:10.307685   26685 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:38:10.307743   26685 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:38:10.307832   26685 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:38:10.307861   26685 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:38:10.311660   26685 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:38:10.311734   26685 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:38:10.311872   26685 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:38:10.312033   26685 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:38:10.312155   26685 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:38:10.312222   26685 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
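Of the four preflight warnings above, only the cgroup-driver mismatch ("cgroupfs" detected, "systemd" recommended) is a plausible contributor to a kubelet that never answers healthz; swap and the Docker-version check (SystemVerification) were explicitly ignored via --ignore-preflight-errors. For reference, the documented way to switch Docker to the systemd cgroup driver is a daemon.json setting; this is a general remedy sketch, not something the test harness attempts:

    # write the documented exec-opts setting, then restart the daemon
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    { "exec-opts": ["native.cgroupdriver=systemd"] }
    EOF
    sudo systemctl restart docker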
	W0226 03:38:10.312280   26685 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
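kubeadm's message above already names the triage sequence; collected here as one hedged helper to run inside the node (commands taken verbatim from the log, with only --no-pager/tail added for non-interactive use; CONTAINERID stands for whatever the ps listing prints):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    sudo docker ps -a | grep kube | grep -v pause
    # for a failing container from the listing:
    # sudo docker logs CONTAINERID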
	
	I0226 03:38:10.312319   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 03:38:10.730970   26685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:38:10.748229   26685 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:38:10.748289   26685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:38:10.763194   26685 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:38:10.763217   26685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:38:10.819600   26685 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:38:10.819653   26685 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:38:11.068302   26685 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:38:11.068398   26685 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:38:11.068473   26685 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:38:11.237646   26685 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:38:11.238480   26685 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:38:11.244961   26685 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:38:11.307467   26685 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:38:11.329098   26685 out.go:204]   - Generating certificates and keys ...
	I0226 03:38:11.329178   26685 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:38:11.329245   26685 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:38:11.329303   26685 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:38:11.329352   26685 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:38:11.329413   26685 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:38:11.329467   26685 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:38:11.329529   26685 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:38:11.329590   26685 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:38:11.329649   26685 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:38:11.329736   26685 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:38:11.329822   26685 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:38:11.329929   26685 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:38:11.376909   26685 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:38:11.443542   26685 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:38:11.806043   26685 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:38:11.923734   26685 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:38:11.924309   26685 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:38:11.945828   26685 out.go:204]   - Booting up control plane ...
	I0226 03:38:11.945971   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:38:11.946116   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:38:11.946215   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:38:11.946361   26685 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:38:11.946663   26685 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:38:51.934107   26685 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:38:51.934916   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:51.935145   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:56.936634   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:56.936805   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:39:06.937773   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:39:06.937956   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:39:26.939275   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:39:26.939439   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:40:06.941333   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:40:06.941536   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:40:06.941547   26685 kubeadm.go:322] 
	I0226 03:40:06.941579   26685 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:40:06.941610   26685 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:40:06.941617   26685 kubeadm.go:322] 
	I0226 03:40:06.941646   26685 kubeadm.go:322] This error is likely caused by:
	I0226 03:40:06.941672   26685 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:40:06.941760   26685 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:40:06.941767   26685 kubeadm.go:322] 
	I0226 03:40:06.941857   26685 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:40:06.941890   26685 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:40:06.941914   26685 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:40:06.941920   26685 kubeadm.go:322] 
	I0226 03:40:06.942000   26685 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:40:06.942086   26685 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:40:06.942155   26685 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:40:06.942194   26685 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:40:06.942255   26685 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:40:06.942296   26685 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:40:06.946133   26685 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:40:06.946208   26685 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:40:06.946370   26685 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:40:06.946513   26685 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:40:06.946614   26685 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:40:06.946720   26685 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 03:40:06.946734   26685 kubeadm.go:406] StartCluster complete in 8m7.132493816s
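StartCluster fails after the second full init attempt (8m7s total), again in wait-control-plane. The condition kubeadm polled the whole time is just the kubelet's healthz endpoint; the probe from the [kubelet-check] lines, runnable by hand on the node:

    curl -sSL http://localhost:10248/healthz; echo
    # 'connection refused' here matches the log: the kubelet never started listening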
	I0226 03:40:06.946819   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:40:06.965123   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.965138   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:40:06.965207   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:40:06.981681   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.981697   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:40:06.981768   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:40:06.998836   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.998852   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:40:06.998924   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:40:07.016987   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.017001   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:40:07.017074   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:40:07.034740   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.034755   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:40:07.034817   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:40:07.052062   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.052077   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:40:07.052142   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:40:07.069652   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.069667   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:40:07.069736   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:40:07.087059   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.087074   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:40:07.087083   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:40:07.087090   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:40:07.106593   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:40:07.106608   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:40:07.178051   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:40:07.178064   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:40:07.178072   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:40:07.198843   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:40:07.198857   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:40:07.259135   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:40:07.259149   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
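	The container-status probe two steps up is worth unpacking: it prefers crictl when installed and falls back to plain docker otherwise. A sketch of the same idiom, copied from the command above and runnable on any node:
	
		# `which crictl || echo crictl` yields the crictl path if present, else the
		# bare name; if crictl is absent the first command fails and || runs docker
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a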
	W0226 03:40:07.301212   26685 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 03:40:07.301236   26685 out.go:239] * 
	W0226 03:40:07.301294   26685 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:40:07.301315   26685 out.go:239] * 
	W0226 03:40:07.301923   26685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 03:40:07.386578   26685 out.go:177] 
	W0226 03:40:07.428870   26685 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:40:07.428936   26685 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 03:40:07.428968   26685 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 03:40:07.450628   26685 out.go:177] 

                                                
                                                
** /stderr **
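The log has already printed its own remediation before the test asserts: the kubelet never answered its health check, and the preflight warnings point at a cgroup driver mismatch (Docker reports cgroupfs, the recommended driver is systemd). A hedged sketch of acting on the Suggestion line above, using only names that appear in this log:

	# inspect the failing kubelet inside the node
	minikube ssh -p old-k8s-version-326000 "sudo journalctl -xeu kubelet"
	# retry the start with the suggested kubelet cgroup-driver override
	out/minikube-darwin-amd64 start -p old-k8s-version-326000 --extra-config=kubelet.cgroup-driver=systemd

Whether that override is sufficient for a v1.16.0 kubelet under Docker 25.0.3 is the open question tracked in the related issue; the sketch is a starting point, not a confirmed fix.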
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393521,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:31:41.00994209Z",
	            "FinishedAt": "2024-02-26T11:31:38.07734926Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0cce2209868f05ead2592e14c5ef5a3aa02f6ae46d9e9259d358771c5c64dff0",
	            "SandboxKey": "/var/run/docker/netns/0cce2209868f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61949"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61950"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61946"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61947"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61948"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "899d13e21bc290a1d54cc28fa6907de86db58760bae6503215e004ce92f8f3f0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
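The inspect output confirms the container itself is fine ("Status": "running", RestartCount 0) with the API server's 8443/tcp published on 127.0.0.1:61948, so the failure is inside the guest rather than at the Docker layer. Those two facts can be spot-checked without dumping the full JSON; a sketch using standard docker inspect Go templates and the profile name above:

	# container state
	docker inspect -f '{{.State.Status}}' old-k8s-version-326000
	# host port mapped to the API server's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-326000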
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (403.666413ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25: (1.470183126s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-722000 sudo                                  | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:27 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-722000 sudo                                  | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p calico-722000 sudo                                  | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:27 PST |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p calico-722000 sudo find                             | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:27 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p calico-722000 sudo crio                             | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:27 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p calico-722000                                       | calico-722000          | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:27 PST |
	| start   | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:27 PST | 26 Feb 24 03:28 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-136000             | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:28 PST | 26 Feb 24 03:28 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:28 PST | 26 Feb 24 03:28 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-136000                  | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:28 PST | 26 Feb 24 03:28 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:28 PST | 26 Feb 24 03:34 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-326000        | old-k8s-version-326000 | jenkins | v1.32.0 | 26 Feb 24 03:29 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.32.0 | 26 Feb 24 03:31 PST | 26 Feb 24 03:31 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-326000             | old-k8s-version-326000 | jenkins | v1.32.0 | 26 Feb 24 03:31 PST | 26 Feb 24 03:31 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.32.0 | 26 Feb 24 03:31 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-136000 image list                           | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:34 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:34 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:34 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:34 PST |
	| delete  | -p no-preload-136000                                   | no-preload-136000      | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:34 PST |
	| start   | -p embed-certs-624000                                  | embed-certs-624000     | jenkins | v1.32.0 | 26 Feb 24 03:34 PST | 26 Feb 24 03:35 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-624000            | embed-certs-624000     | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-624000                                  | embed-certs-624000     | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-624000                 | embed-certs-624000     | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-624000                                  | embed-certs-624000     | jenkins | v1.32.0 | 26 Feb 24 03:36 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 03:36:13
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 03:36:13.092154   27170 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:36:13.092405   27170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:36:13.092411   27170 out.go:304] Setting ErrFile to fd 2...
	I0226 03:36:13.092414   27170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:36:13.092603   27170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:36:13.093910   27170 out.go:298] Setting JSON to false
	I0226 03:36:13.115997   27170 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":12944,"bootTime":1708934429,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:36:13.116086   27170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:36:13.138649   27170 out.go:177] * [embed-certs-624000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:36:13.181147   27170 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:36:13.181195   27170 notify.go:220] Checking for updates...
	I0226 03:36:13.203101   27170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:36:13.223788   27170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:36:13.244958   27170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:36:13.266108   27170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:36:13.286958   27170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:36:13.308739   27170 config.go:182] Loaded profile config "embed-certs-624000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 03:36:13.309622   27170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:36:13.365804   27170 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:36:13.365976   27170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:36:13.470506   27170 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:36:13.460626151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
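
The docker system info --format "{{json .}}" probe above (repeated a few lines later when the driver is re-validated) is how minikube snapshots daemon capabilities before committing to the docker driver. A minimal sketch of the same probe in Go, decoding only a handful of the fields visible in the dump; the struct here is illustrative, not minikube's own info type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo picks out a few of the fields visible in the dump above;
    // the real output carries the full `docker system info` document.
    type dockerInfo struct {
    	NCPU            int
    	MemTotal        int64
    	CgroupDriver    string
    	OperatingSystem string
    	ServerVersion   string
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// e.g. {NCPU:12 MemTotal:6213300224 CgroupDriver:cgroupfs ...}
    	fmt.Printf("%+v\n", info)
    }
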
	I0226 03:36:13.512139   27170 out.go:177] * Using the docker driver based on existing profile
	I0226 03:36:13.533367   27170 start.go:299] selected driver: docker
	I0226 03:36:13.533379   27170 start.go:903] validating driver "docker" against &{Name:embed-certs-624000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:36:13.533433   27170 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:36:13.536428   27170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:36:13.634243   27170 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:36:13.624478272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:36:13.634486   27170 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 03:36:13.634553   27170 cni.go:84] Creating CNI manager for ""
	I0226 03:36:13.634568   27170 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
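
The bridge recommendation above is a version gate: with the docker driver and the docker container runtime on Kubernetes v1.24 or newer (post-dockershim, via cri-dockerd), minikube falls back to its own bridge CNI. A hypothetical reconstruction of that rule, using golang.org/x/mod/semver for the comparison; chooseCNI is an invented name, not minikube's cni.go API:

    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    // chooseCNI sketches the rule the log states: docker driver + docker
    // runtime on Kubernetes v1.24+ means minikube supplies a bridge CNI.
    func chooseCNI(driver, runtime, kubeVersion string) string {
    	if driver == "docker" && runtime == "docker" &&
    		semver.Compare(kubeVersion, "v1.24.0") >= 0 {
    		return "bridge"
    	}
    	return "" // otherwise defer to the runtime/driver defaults
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "docker", "v1.28.4")) // bridge
    }
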
	I0226 03:36:13.634577   27170 start_flags.go:323] config:
	{Name:embed-certs-624000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:36:13.656360   27170 out.go:177] * Starting control plane node embed-certs-624000 in cluster embed-certs-624000
	I0226 03:36:13.676858   27170 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:36:13.718936   27170 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:36:13.761051   27170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 03:36:13.761138   27170 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 03:36:13.761154   27170 cache.go:56] Caching tarball of preloaded images
	I0226 03:36:13.761121   27170 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:36:13.761372   27170 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:36:13.761394   27170 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0226 03:36:13.761526   27170 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/config.json ...
	I0226 03:36:13.812861   27170 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:36:13.812892   27170 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:36:13.812914   27170 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:36:13.812977   27170 start.go:365] acquiring machines lock for embed-certs-624000: {Name:mkef297fccd6a282d8bbd66a3216786559ecd873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:36:13.813074   27170 start.go:369] acquired machines lock for "embed-certs-624000" in 76.19µs
	I0226 03:36:13.813097   27170 start.go:96] Skipping create...Using existing machine configuration
	I0226 03:36:13.813108   27170 fix.go:54] fixHost starting: 
	I0226 03:36:13.813340   27170 cli_runner.go:164] Run: docker container inspect embed-certs-624000 --format={{.State.Status}}
	I0226 03:36:13.863702   27170 fix.go:102] recreateIfNeeded on embed-certs-624000: state=Stopped err=<nil>
	W0226 03:36:13.863740   27170 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 03:36:13.885624   27170 out.go:177] * Restarting existing docker container for "embed-certs-624000" ...
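
fixHost above reuses the stopped container instead of recreating it: inspect the state, and docker start it if it is not running. The same inspect-then-start sequence, sketched with os/exec around the exact docker commands the log shows (note that docker itself reports a stopped container as "exited"; "Stopped" is minikube's own state name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // restartIfStopped mirrors the two logged docker commands:
    //   docker container inspect <name> --format={{.State.Status}}
    //   docker start <name>
    func restartIfStopped(name string) error {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return fmt.Errorf("inspect %s: %w", name, err)
    	}
    	if state := strings.TrimSpace(string(out)); state != "running" {
    		// A previously stopped container is reused rather than recreated.
    		if err := exec.Command("docker", "start", name).Run(); err != nil {
    			return fmt.Errorf("start %s: %w", name, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := restartIfStopped("embed-certs-624000"); err != nil {
    		fmt.Println(err)
    	}
    }
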
	I0226 03:36:10.741749   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:10.760191   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:36:10.777109   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.777126   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:36:10.777202   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:36:10.793727   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.793742   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:36:10.793812   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:36:10.811359   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.811375   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:36:10.811445   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:36:10.827423   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.827445   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:36:10.827520   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:36:10.843460   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.843474   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:36:10.843538   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:36:10.860398   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.860412   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:36:10.860485   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:36:10.877202   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.877217   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:36:10.877283   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:36:10.894157   26685 logs.go:276] 0 containers: []
	W0226 03:36:10.894176   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:36:10.894186   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:36:10.894195   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:36:10.912990   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:36:10.913006   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:36:11.016933   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:36:11.016945   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:36:11.016953   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:36:11.038215   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:36:11.038229   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:36:11.103125   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:36:11.103148   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 03:36:13.647766   26685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:13.664518   26685 kubeadm.go:640] restartCluster took 4m13.817771718s
	W0226 03:36:13.664573   26685 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0226 03:36:13.664599   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 03:36:14.090974   26685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:36:14.109305   26685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:36:14.124871   26685 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:36:14.124930   26685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:36:14.169080   26685 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:36:14.169109   26685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:36:14.230640   26685 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:36:14.230862   26685 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:36:14.536024   26685 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:36:14.536125   26685 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:36:14.536219   26685 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0226 03:36:14.699643   26685 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:36:14.700324   26685 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:36:14.706877   26685 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:36:14.777843   26685 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:36:14.808241   26685 out.go:204]   - Generating certificates and keys ...
	I0226 03:36:14.808309   26685 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:36:14.808360   26685 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:36:14.808423   26685 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:36:14.808477   26685 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:36:14.808531   26685 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:36:14.808584   26685 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:36:14.808653   26685 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:36:14.808701   26685 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:36:14.808758   26685 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:36:14.808825   26685 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:36:14.808860   26685 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:36:14.808909   26685 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:36:14.840484   26685 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:36:14.951739   26685 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:36:15.003661   26685 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:36:15.283964   26685 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:36:15.284488   26685 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:36:13.928170   27170 cli_runner.go:164] Run: docker start embed-certs-624000
	I0226 03:36:14.182250   27170 cli_runner.go:164] Run: docker container inspect embed-certs-624000 --format={{.State.Status}}
	I0226 03:36:14.237605   27170 kic.go:430] container "embed-certs-624000" state is running.
	I0226 03:36:14.238219   27170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-624000
	I0226 03:36:14.294680   27170 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/config.json ...
	I0226 03:36:14.295128   27170 machine.go:88] provisioning docker machine ...
	I0226 03:36:14.295160   27170 ubuntu.go:169] provisioning hostname "embed-certs-624000"
	I0226 03:36:14.295259   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:14.355334   27170 main.go:141] libmachine: Using SSH client type: native
	I0226 03:36:14.355853   27170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6943920] 0x6946680 <nil>  [] 0s} 127.0.0.1 62086 <nil> <nil>}
	I0226 03:36:14.355881   27170 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-624000 && echo "embed-certs-624000" | sudo tee /etc/hostname
	I0226 03:36:14.357513   27170 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0226 03:36:17.518656   27170 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-624000
	
	I0226 03:36:17.518751   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:17.569672   27170 main.go:141] libmachine: Using SSH client type: native
	I0226 03:36:17.569846   27170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6943920] 0x6946680 <nil>  [] 0s} 127.0.0.1 62086 <nil> <nil>}
	I0226 03:36:17.569862   27170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-624000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-624000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-624000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:36:17.706376   27170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
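
The provisioning steps above run over libmachine's native SSH client, dialing the host port Docker forwards to the container's sshd (127.0.0.1:62086) as user docker; the first handshake fails with EOF while sshd is still coming up, and the retry succeeds. A bare-bones equivalent using golang.org/x/crypto/ssh, under the assumption that host-key checking can be skipped for a localhost-forwarded test machine (the key path is shortened from the log's Jenkins workspace path):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only: localhost-forwarded test VM
    	}
    	// A cold container can return "handshake failed: EOF" here; callers retry.
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("127.0.0.1:62086", "docker",
    		os.ExpandEnv("$HOME/.minikube/machines/embed-certs-624000/id_rsa"),
    		`sudo hostname embed-certs-624000 && echo "embed-certs-624000" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
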
	I0226 03:36:17.706401   27170 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:36:17.706426   27170 ubuntu.go:177] setting up certificates
	I0226 03:36:17.706443   27170 provision.go:83] configureAuth start
	I0226 03:36:17.706524   27170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-624000
	I0226 03:36:17.756434   27170 provision.go:138] copyHostCerts
	I0226 03:36:17.756533   27170 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:36:17.756544   27170 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:36:17.756683   27170 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:36:17.756938   27170 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:36:17.756944   27170 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:36:17.757009   27170 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:36:17.757182   27170 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:36:17.757188   27170 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:36:17.757254   27170 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:36:17.757418   27170 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.embed-certs-624000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-624000]
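
configureAuth issues a server certificate whose subject-alternative names cover the container IP, loopback, and the machine's names (127.0.0.1 appears twice in the logged SAN list; duplicates are harmless), signed by the CA under .minikube/certs. A condensed crypto/x509 sketch of issuing such a cert with those SANs; the throwaway in-memory CA stands in for the ca.pem/ca-key.pem pair the log references, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative only: a throwaway CA stands in for .minikube/certs/ca.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"jenkins.embed-certs-624000"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs the log reports for embed-certs-624000.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "embed-certs-624000"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "embed-certs-624000"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
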
	I0226 03:36:17.947933   27170 provision.go:172] copyRemoteCerts
	I0226 03:36:17.948004   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:36:17.948057   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:17.999491   27170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62086 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/embed-certs-624000/id_rsa Username:docker}
	I0226 03:36:15.327930   26685 out.go:204]   - Booting up control plane ...
	I0226 03:36:15.328100   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:36:15.328226   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:36:15.328344   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:36:15.328500   26685 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:36:15.328754   26685 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
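
While kubeadm waits for the static pods, the health gate that matters is the apiserver's /healthz endpoint on port 8443; the earlier restartCluster attempt gave up precisely because "apiserver process never appeared". A bare-bones poller for that endpoint, assuming TLS verification may be skipped for the probe (a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls https://<host>/healthz until it answers 200 or the
    // deadline passes, i.e. the gate behind the wait-control-plane phase.
    func waitHealthz(host string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://" + host + "/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver on %s never became healthy within %s", host, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("localhost:8443", 4*time.Minute))
    }
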
	I0226 03:36:18.101937   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:36:18.157833   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0226 03:36:18.197844   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 03:36:18.238127   27170 provision.go:86] duration metric: configureAuth took 531.661726ms
	I0226 03:36:18.238143   27170 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:36:18.238311   27170 config.go:182] Loaded profile config "embed-certs-624000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 03:36:18.238384   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:18.289382   27170 main.go:141] libmachine: Using SSH client type: native
	I0226 03:36:18.289574   27170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6943920] 0x6946680 <nil>  [] 0s} 127.0.0.1 62086 <nil> <nil>}
	I0226 03:36:18.289584   27170 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:36:18.428832   27170 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:36:18.428846   27170 ubuntu.go:71] root file system type: overlay
	I0226 03:36:18.428932   27170 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:36:18.429023   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:18.479848   27170 main.go:141] libmachine: Using SSH client type: native
	I0226 03:36:18.480032   27170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6943920] 0x6946680 <nil>  [] 0s} 127.0.0.1 62086 <nil> <nil>}
	I0226 03:36:18.480078   27170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:36:18.639455   27170 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:36:18.639576   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:18.689619   27170 main.go:141] libmachine: Using SSH client type: native
	I0226 03:36:18.689804   27170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6943920] 0x6946680 <nil>  [] 0s} 127.0.0.1 62086 <nil> <nil>}
	I0226 03:36:18.689817   27170 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:36:18.841022   27170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
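
The unit update just above is deliberately idempotent: write docker.service.new, and only if it differs from the live unit swap it in and daemon-reload/enable/restart docker. The same compare-and-swap expressed in Go; unlike the logged one-liner, which runs entirely under sudo on the remote machine, this sketch assumes it is already running as root:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // swapIfChanged applies the logged "diff old new || { mv; daemon-reload;
    // enable; restart; }" pattern: docker is only restarted when the rendered
    // unit actually changed.
    func swapIfChanged(current, next string) error {
    	a, _ := os.ReadFile(current) // missing current unit counts as "changed"
    	b, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(a, b) {
    		return os.Remove(next) // identical: nothing to do
    	}
    	if err := os.Rename(next, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"-f", "daemon-reload"},
    		{"-f", "enable", "docker"},
    		{"-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(swapIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }
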
	I0226 03:36:18.841040   27170 machine.go:91] provisioned docker machine in 4.545842885s
	I0226 03:36:18.841052   27170 start.go:300] post-start starting for "embed-certs-624000" (driver="docker")
	I0226 03:36:18.841061   27170 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:36:18.841139   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:36:18.841207   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:18.894831   27170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62086 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/embed-certs-624000/id_rsa Username:docker}
	I0226 03:36:18.997838   27170 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:36:19.002515   27170 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:36:19.002559   27170 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:36:19.002567   27170 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:36:19.002572   27170 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:36:19.002580   27170 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:36:19.002682   27170 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:36:19.002815   27170 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:36:19.002977   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:36:19.017677   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:36:19.057331   27170 start.go:303] post-start completed in 216.266178ms
	I0226 03:36:19.057411   27170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:36:19.057466   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:19.108063   27170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62086 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/embed-certs-624000/id_rsa Username:docker}
	I0226 03:36:19.200716   27170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 03:36:19.205739   27170 fix.go:56] fixHost completed within 5.392559306s
	I0226 03:36:19.205762   27170 start.go:83] releasing machines lock for "embed-certs-624000", held for 5.392607608s
	I0226 03:36:19.205876   27170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-624000
	I0226 03:36:19.255757   27170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:36:19.255757   27170 ssh_runner.go:195] Run: cat /version.json
	I0226 03:36:19.255854   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:19.255855   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:19.307880   27170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62086 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/embed-certs-624000/id_rsa Username:docker}
	I0226 03:36:19.307899   27170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62086 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/embed-certs-624000/id_rsa Username:docker}
	I0226 03:36:19.502969   27170 ssh_runner.go:195] Run: systemctl --version
	I0226 03:36:19.508635   27170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 03:36:19.513714   27170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 03:36:19.543794   27170 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0226 03:36:19.543869   27170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 03:36:19.558857   27170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0226 03:36:19.558883   27170 start.go:475] detecting cgroup driver to use...
	I0226 03:36:19.558895   27170 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:36:19.559014   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:36:19.586693   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 03:36:19.603427   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:36:19.620760   27170 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:36:19.620830   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:36:19.638230   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:36:19.654837   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:36:19.671376   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:36:19.687262   27170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:36:19.703049   27170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:36:19.718830   27170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:36:19.734405   27170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:36:19.748959   27170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:36:19.809748   27170 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 03:36:19.890215   27170 start.go:475] detecting cgroup driver to use...
	I0226 03:36:19.890236   27170 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:36:19.890311   27170 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:36:19.908162   27170 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:36:19.908233   27170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:36:19.927033   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:36:19.957762   27170 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:36:19.968671   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:36:19.986367   27170 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:36:20.019938   27170 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:36:20.120185   27170 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:36:20.204172   27170 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:36:20.204254   27170 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:36:20.234363   27170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:36:20.304475   27170 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:36:20.580923   27170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 03:36:20.597659   27170 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0226 03:36:20.615409   27170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:36:20.632315   27170 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 03:36:20.694500   27170 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 03:36:20.756607   27170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:36:20.818994   27170 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 03:36:20.854668   27170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:36:20.872414   27170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:36:20.933957   27170 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 03:36:21.034005   27170 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 03:36:21.034088   27170 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 03:36:21.038551   27170 start.go:543] Will wait 60s for crictl version
	I0226 03:36:21.038598   27170 ssh_runner.go:195] Run: which crictl
	I0226 03:36:21.042698   27170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 03:36:21.093824   27170 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
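
Before trusting the runtime, the start path waits up to 60s for /var/run/cri-dockerd.sock to appear and then asks crictl for the CRI version parsed above. A minimal version of the socket wait, polling with os.Stat under the same 60-second budget:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket mirrors "Will wait 60s for socket path
    // /var/run/cri-dockerd.sock": stat the path until it exists or the
    // deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
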
	I0226 03:36:21.093913   27170 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:36:21.117258   27170 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:36:21.186840   27170 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0226 03:36:21.186998   27170 cli_runner.go:164] Run: docker exec -t embed-certs-624000 dig +short host.docker.internal
	I0226 03:36:21.286178   27170 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:36:21.286282   27170 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:36:21.290835   27170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:36:21.308202   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:21.358912   27170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 03:36:21.359003   27170 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:36:21.376117   27170 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0226 03:36:21.376140   27170 docker.go:615] Images already preloaded, skipping extraction
	I0226 03:36:21.376233   27170 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:36:21.393472   27170 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0226 03:36:21.393490   27170 cache_images.go:84] Images are preloaded, skipping loading
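
The "Images are preloaded" decision comes from listing what the daemon already holds (docker images --format {{.Repository}}:{{.Tag}}, as run twice above) and checking that the required set is a subset. A sketch of that subset check, with the required list abbreviated from the stdout block above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagesPreloaded runs the same listing as the log and verifies every
    // required image is already present in the daemon.
    func imagesPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false, nil // at least one image missing: extract the preload tarball
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.28.4",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/coredns/coredns:v1.10.1",
    		"registry.k8s.io/pause:3.9",
    	})
    	fmt.Println(ok, err)
    }
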
	I0226 03:36:21.393579   27170 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:36:21.439657   27170 cni.go:84] Creating CNI manager for ""
	I0226 03:36:21.439675   27170 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:36:21.439693   27170 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 03:36:21.439707   27170 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-624000 NodeName:embed-certs-624000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 03:36:21.439817   27170 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-624000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 03:36:21.439893   27170 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-624000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
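The kubelet unit dump above relies on a standard systemd override semantic: in a drop-in, an empty ExecStart= line first clears the base unit's command, and the following ExecStart= redefines it, which is how the 10-kubeadm.conf drop-in scp'd below fully replaces the kubelet command line. A sketch of the same technique (flag values are illustrative, not copied from a live node):

	# Blank-then-redefine ExecStart in a drop-in so the base kubelet.service
	# command is fully replaced. Flags below are illustrative.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload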
	I0226 03:36:21.439956   27170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 03:36:21.454891   27170 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:36:21.454962   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:36:21.469584   27170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0226 03:36:21.498036   27170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 03:36:21.526083   27170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0226 03:36:21.555333   27170 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:36:21.559536   27170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:36:21.576527   27170 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000 for IP: 192.168.67.2
	I0226 03:36:21.576550   27170 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:36:21.576720   27170 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:36:21.576766   27170 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:36:21.576865   27170 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/client.key
	I0226 03:36:21.576930   27170 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/apiserver.key.c7fa3a9e
	I0226 03:36:21.576978   27170 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/proxy-client.key
	I0226 03:36:21.577183   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:36:21.577219   27170 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:36:21.577228   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:36:21.577260   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:36:21.577292   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:36:21.577320   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:36:21.577391   27170 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:36:21.577932   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:36:21.617758   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 03:36:21.657631   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:36:21.697654   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/embed-certs-624000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 03:36:21.737762   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:36:21.778241   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:36:21.818910   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:36:21.859994   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:36:21.903439   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:36:21.944052   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:36:21.985448   27170 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:36:22.025878   27170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:36:22.054139   27170 ssh_runner.go:195] Run: openssl version
	I0226 03:36:22.060291   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:36:22.076259   27170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:36:22.080697   27170 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:36:22.080747   27170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:36:22.087250   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:36:22.102185   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:36:22.117790   27170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:36:22.121847   27170 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:36:22.121903   27170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:36:22.128824   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:36:22.144210   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:36:22.160462   27170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:36:22.164633   27170 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:36:22.164686   27170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:36:22.171173   27170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
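The ls/openssl/ln sequence above installs each CA into OpenSSL's hashed trust directory: `openssl x509 -hash -noout` prints the subject hash, and the cert is symlinked as /etc/ssl/certs/<hash>.0 (the .0 suffix leaves room for .1, .2, ... on hash collisions), which is how libssl and `openssl verify` locate CAs. A sketch generalizing the pattern (the CERT path is one of the files from this run, used illustratively):

	# Install a CA under its subject-hash name, mirroring the ln -fs calls
	# above. CERT path is illustrative.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"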
	I0226 03:36:22.187601   27170 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:36:22.191723   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 03:36:22.197905   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 03:36:22.204284   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 03:36:22.211433   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 03:36:22.217965   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 03:36:22.224475   27170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
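Each `-checkend 86400` probe above exits non-zero if the certificate will expire within 86400 seconds (24 hours), letting minikube decide up front whether any control-plane cert needs regeneration before kubeadm runs. The same checks, condensed into a loop over the certs probed in this run:

	# Flag any control-plane cert expiring within 24h; -checkend returns
	# 0 if the cert outlives the window, 1 otherwise.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "renew: ${c}"
	done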
	I0226 03:36:22.232018   27170 kubeadm.go:404] StartCluster: {Name:embed-certs-624000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:36:22.232144   27170 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:36:22.250615   27170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:36:22.265627   27170 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 03:36:22.265646   27170 kubeadm.go:636] restartCluster start
	I0226 03:36:22.265700   27170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 03:36:22.280267   27170 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:22.280353   27170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-624000
	I0226 03:36:22.331457   27170 kubeconfig.go:135] verify returned: extract IP: "embed-certs-624000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:36:22.331628   27170 kubeconfig.go:146] "embed-certs-624000" context is missing from /Users/jenkins/minikube-integration/18222-9538/kubeconfig - will repair!
	I0226 03:36:22.331993   27170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:36:22.333463   27170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 03:36:22.348837   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:22.348909   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:22.364627   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:22.849146   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:22.849306   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:22.867161   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:23.349393   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:23.349463   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:23.367016   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:23.850885   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:23.850997   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:23.869412   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:24.349114   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:24.349218   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:24.367813   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:24.849772   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:24.849837   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:24.866723   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:25.350933   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:25.351063   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:25.369155   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:25.851009   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:25.851154   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:25.869269   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:26.348947   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:26.349017   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:26.367023   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:26.849209   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:26.849347   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:26.868131   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:27.350166   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:27.350325   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:27.367980   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:27.850614   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:27.850763   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:27.869274   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:28.350252   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:28.350394   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:28.369037   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:28.848977   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:28.849072   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:28.867498   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:29.349549   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:29.349650   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:29.367916   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:29.849057   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:29.849116   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:29.865892   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:30.349030   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:30.349121   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:30.366853   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:30.849431   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:30.849566   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:30.866928   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:31.349108   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:31.349281   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:31.366783   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:31.849141   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:31.849210   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:31.866174   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:32.349056   27170 api_server.go:166] Checking apiserver status ...
	I0226 03:36:32.349148   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:36:32.368022   27170 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:32.368037   27170 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0226 03:36:32.368059   27170 kubeadm.go:1135] stopping kube-system containers ...
	I0226 03:36:32.368127   27170 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:36:32.387470   27170 docker.go:483] Stopping containers: [d0aefa52639f b80c5dbeb949 3920872cf9d5 fe7198f8fccc 4d6aa41c607e 4c36f4691d3a f290820b6e03 3f6c6f770932 6c920e036a5e ccb2685bd819 bf034ddfd2e4 e1aaf42a78ed 5bc64abac309 63c6dc218631 6d22a9afdd31]
	I0226 03:36:32.387571   27170 ssh_runner.go:195] Run: docker stop d0aefa52639f b80c5dbeb949 3920872cf9d5 fe7198f8fccc 4d6aa41c607e 4c36f4691d3a f290820b6e03 3f6c6f770932 6c920e036a5e ccb2685bd819 bf034ddfd2e4 e1aaf42a78ed 5bc64abac309 63c6dc218631 6d22a9afdd31
	I0226 03:36:32.408298   27170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 03:36:32.426643   27170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:36:32.441710   27170 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 26 11:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 26 11:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 26 11:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 26 11:34 /etc/kubernetes/scheduler.conf
	
	I0226 03:36:32.441786   27170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 03:36:32.456834   27170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 03:36:32.471676   27170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 03:36:32.486859   27170 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:32.486917   27170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0226 03:36:32.501785   27170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 03:36:32.516406   27170 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:36:32.516463   27170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0226 03:36:32.530985   27170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:36:32.545888   27170 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 03:36:32.545902   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:36:32.627820   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:36:33.272659   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:36:33.405253   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:36:33.462564   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
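Note the restart path does not rerun a full `kubeadm init`; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, then waits for the apiserver. The five commands above, condensed into a sketch (paths are the ones from this log; this runs on the node, not the host):

	# Phased restart, as executed above. $phase is intentionally unquoted
	# so "certs all" splits into subcommand + argument.
	K=/var/lib/minikube/binaries/v1.28.4
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" \
	             "control-plane all" "etcd local"; do
	  sudo env PATH="$K:$PATH" kubeadm init phase $phase --config "$CFG"
	done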
	I0226 03:36:33.574944   27170 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:36:33.575066   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:34.075758   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:34.575143   27170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:36:34.593206   27170 api_server.go:72] duration metric: took 1.018254302s to wait for apiserver process to appear ...
	I0226 03:36:34.593221   27170 api_server.go:88] waiting for apiserver healthz status ...
	I0226 03:36:34.593237   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:37.325972   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0226 03:36:37.325990   27170 api_server.go:103] status: https://127.0.0.1:62090/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:36:37.326001   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:37.470104   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0226 03:36:37.470131   27170 api_server.go:103] status: https://127.0.0.1:62090/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:36:37.593627   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:37.672365   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:36:37.672383   27170 api_server.go:103] status: https://127.0.0.1:62090/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:36:38.093516   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:38.174232   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:36:38.174268   27170 api_server.go:103] status: https://127.0.0.1:62090/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:36:38.593379   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:38.599600   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:36:38.599634   27170 api_server.go:103] status: https://127.0.0.1:62090/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:36:39.093516   27170 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62090/healthz ...
	I0226 03:36:39.099175   27170 api_server.go:279] https://127.0.0.1:62090/healthz returned 200:
	ok
	I0226 03:36:39.105683   27170 api_server.go:141] control plane version: v1.28.4
	I0226 03:36:39.105697   27170 api_server.go:131] duration metric: took 4.512410331s to wait for apiserver health ...
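The healthz wait above shows the expected startup progression: 403s first, because the anonymous user cannot read /healthz until the RBAC bootstrap roles exist; then 500s while the rbac/bootstrap-roles and scheduling poststarthooks finish; then 200. A curl-based sketch of the same gate (62090 is the host-published port from this run; -k because minikube's CA is not in the host trust store):

	# Poll apiserver /healthz until 200, tolerating transient 403/500.
	until [ "$(curl -k -s -o /dev/null -w '%{http_code}' \
	          https://127.0.0.1:62090/healthz)" = "200" ]; do
	  sleep 0.5
	done
	echo "apiserver healthy"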
	I0226 03:36:39.105703   27170 cni.go:84] Creating CNI manager for ""
	I0226 03:36:39.105713   27170 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:36:39.127807   27170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 03:36:39.148431   27170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 03:36:39.165840   27170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
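The 457-byte file written above is a bridge CNI conflist matching the 10.244.0.0/16 pod CIDR chosen earlier. A representative config in the same spirit (field values are illustrative and not guaranteed byte-for-byte identical to what minikube installs):

	# Representative bridge CNI config for /etc/cni/net.d/1-k8s.conflist;
	# an assumption-level sketch, not minikube's exact payload.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF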
	I0226 03:36:39.194635   27170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 03:36:39.203787   27170 system_pods.go:59] 8 kube-system pods found
	I0226 03:36:39.203810   27170 system_pods.go:61] "coredns-5dd5756b68-9lxzx" [b3d16627-9f7b-444e-a6a3-15acd7a415ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 03:36:39.203816   27170 system_pods.go:61] "etcd-embed-certs-624000" [be990661-b309-416a-9869-a515e37112b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0226 03:36:39.203822   27170 system_pods.go:61] "kube-apiserver-embed-certs-624000" [4b6ab06a-2fe6-46b8-a236-6136222cb2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:36:39.203827   27170 system_pods.go:61] "kube-controller-manager-embed-certs-624000" [490f0c11-f376-4975-977d-e3241fd4610b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:36:39.203833   27170 system_pods.go:61] "kube-proxy-9r2ms" [204a191b-3f42-4cea-8f67-fb8ef6f5391b] Running
	I0226 03:36:39.203838   27170 system_pods.go:61] "kube-scheduler-embed-certs-624000" [e86e19fc-71f4-482c-8816-ab8686dcdc05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0226 03:36:39.203846   27170 system_pods.go:61] "metrics-server-57f55c9bc5-sqsgs" [cd61c3a7-580a-43b3-92bb-d5e685127393] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0226 03:36:39.203849   27170 system_pods.go:61] "storage-provisioner" [6eb2cf9c-bd8b-4c7d-ab1e-0de32a66d0e6] Running
	I0226 03:36:39.203854   27170 system_pods.go:74] duration metric: took 9.207526ms to wait for pod list to return data ...
	I0226 03:36:39.203860   27170 node_conditions.go:102] verifying NodePressure condition ...
	I0226 03:36:39.207200   27170 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0226 03:36:39.207215   27170 node_conditions.go:123] node cpu capacity is 12
	I0226 03:36:39.207222   27170 node_conditions.go:105] duration metric: took 3.359347ms to run NodePressure ...
	I0226 03:36:39.207233   27170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:36:39.353904   27170 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0226 03:36:39.358787   27170 kubeadm.go:787] kubelet initialised
	I0226 03:36:39.358799   27170 kubeadm.go:788] duration metric: took 4.880286ms waiting for restarted kubelet to initialise ...
	I0226 03:36:39.358806   27170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 03:36:39.364065   27170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9lxzx" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:40.873627   27170 pod_ready.go:92] pod "coredns-5dd5756b68-9lxzx" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:40.873642   27170 pod_ready.go:81] duration metric: took 1.509543584s waiting for pod "coredns-5dd5756b68-9lxzx" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:40.873649   27170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:42.881281   27170 pod_ready.go:102] pod "etcd-embed-certs-624000" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:45.381763   27170 pod_ready.go:102] pod "etcd-embed-certs-624000" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:47.882530   27170 pod_ready.go:102] pod "etcd-embed-certs-624000" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:50.383683   27170 pod_ready.go:102] pod "etcd-embed-certs-624000" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:52.880883   27170 pod_ready.go:92] pod "etcd-embed-certs-624000" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:52.880902   27170 pod_ready.go:81] duration metric: took 12.007087038s waiting for pod "etcd-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.880919   27170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.885979   27170 pod_ready.go:92] pod "kube-apiserver-embed-certs-624000" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:52.885989   27170 pod_ready.go:81] duration metric: took 5.060804ms waiting for pod "kube-apiserver-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.885996   27170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.890755   27170 pod_ready.go:92] pod "kube-controller-manager-embed-certs-624000" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:52.890766   27170 pod_ready.go:81] duration metric: took 4.764572ms waiting for pod "kube-controller-manager-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.890772   27170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9r2ms" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.895409   27170 pod_ready.go:92] pod "kube-proxy-9r2ms" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:52.895420   27170 pod_ready.go:81] duration metric: took 4.642848ms waiting for pod "kube-proxy-9r2ms" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.895426   27170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.900151   27170 pod_ready.go:92] pod "kube-scheduler-embed-certs-624000" in "kube-system" namespace has status "Ready":"True"
	I0226 03:36:52.900161   27170 pod_ready.go:81] duration metric: took 4.730798ms waiting for pod "kube-scheduler-embed-certs-624000" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:52.900167   27170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace to be "Ready" ...
	I0226 03:36:54.906373   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:57.408102   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:36:55.294046   26685 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:36:55.296699   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:36:55.296880   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:36:59.906171   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:01.907669   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:00.299625   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:00.299851   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:04.408368   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:06.906806   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:09.408416   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:11.907539   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:10.301527   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:10.301755   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:13.907607   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:15.908377   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:18.407422   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:20.907257   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:23.408674   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:25.409191   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:27.907728   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:29.908439   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:31.908826   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:30.303743   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:37:30.303901   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:37:34.410265   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:36.907260   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:39.410046   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:41.908172   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:44.409548   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:46.907752   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:49.407977   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:51.409148   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:53.907553   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:55.907701   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:37:57.908916   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:00.408209   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:02.907462   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:04.908797   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:07.407892   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:10.306593   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:10.306876   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:10.306889   26685 kubeadm.go:322] 
	I0226 03:38:10.306934   26685 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:38:10.306986   26685 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:38:10.306995   26685 kubeadm.go:322] 
	I0226 03:38:10.307031   26685 kubeadm.go:322] This error is likely caused by:
	I0226 03:38:10.307064   26685 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:38:10.307169   26685 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:38:10.307177   26685 kubeadm.go:322] 
	I0226 03:38:10.307299   26685 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:38:10.307340   26685 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:38:10.307379   26685 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:38:10.307385   26685 kubeadm.go:322] 
	I0226 03:38:10.307500   26685 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:38:10.307603   26685 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:38:10.307685   26685 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:38:10.307743   26685 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:38:10.307832   26685 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:38:10.307861   26685 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:38:10.311660   26685 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:38:10.311734   26685 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:38:10.311872   26685 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:38:10.312033   26685 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:38:10.312155   26685 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:38:10.312222   26685 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0226 03:38:10.312280   26685 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0226 03:38:10.312319   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0226 03:38:10.730970   26685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:38:10.748229   26685 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 03:38:10.748289   26685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:38:10.763194   26685 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 03:38:10.763217   26685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 03:38:10.819600   26685 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0226 03:38:10.819653   26685 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 03:38:11.068302   26685 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 03:38:11.068398   26685 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 03:38:11.068473   26685 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 03:38:11.237646   26685 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 03:38:11.238480   26685 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 03:38:11.244961   26685 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0226 03:38:11.307467   26685 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 03:38:11.329098   26685 out.go:204]   - Generating certificates and keys ...
	I0226 03:38:11.329178   26685 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 03:38:11.329245   26685 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 03:38:11.329303   26685 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0226 03:38:11.329352   26685 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0226 03:38:11.329413   26685 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0226 03:38:11.329467   26685 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0226 03:38:11.329529   26685 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0226 03:38:11.329590   26685 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0226 03:38:11.329649   26685 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0226 03:38:11.329736   26685 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0226 03:38:11.329822   26685 kubeadm.go:322] [certs] Using the existing "sa" key
	I0226 03:38:11.329929   26685 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 03:38:11.376909   26685 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 03:38:11.443542   26685 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 03:38:11.806043   26685 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 03:38:11.923734   26685 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 03:38:11.924309   26685 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 03:38:09.408564   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:11.908372   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:11.945828   26685 out.go:204]   - Booting up control plane ...
	I0226 03:38:11.945971   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 03:38:11.946116   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 03:38:11.946215   26685 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 03:38:11.946361   26685 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 03:38:11.946663   26685 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 03:38:14.409016   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:16.908140   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:18.913871   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:21.408235   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:23.411465   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:25.907755   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:28.408151   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:30.408460   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:32.411667   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:34.908946   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:37.408603   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:39.908856   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:42.408903   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:44.909174   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:47.410196   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:49.910030   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:52.408842   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:51.934107   26685 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0226 03:38:51.934916   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:51.935145   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:54.908331   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:56.909094   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:38:56.936634   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:38:56.936805   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:38:58.909324   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:01.412201   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:03.907720   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:05.909114   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:06.937773   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:39:06.937956   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:39:08.409013   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:10.409819   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:12.908930   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:15.408967   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:17.908418   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:19.910323   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:22.407644   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:24.409865   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:26.908922   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:26.939275   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:39:26.939439   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:39:28.909702   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:31.408508   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:33.410320   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:35.413098   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:37.907839   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:39.908532   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:41.909400   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:43.909680   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:45.909776   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:47.909920   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:49.910178   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:51.910328   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:54.410670   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:56.908375   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:39:58.910067   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:40:01.410628   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:40:06.941333   26685 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0226 03:40:06.941536   26685 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0226 03:40:06.941547   26685 kubeadm.go:322] 
	I0226 03:40:06.941579   26685 kubeadm.go:322] Unfortunately, an error has occurred:
	I0226 03:40:06.941610   26685 kubeadm.go:322] 	timed out waiting for the condition
	I0226 03:40:06.941617   26685 kubeadm.go:322] 
	I0226 03:40:06.941646   26685 kubeadm.go:322] This error is likely caused by:
	I0226 03:40:06.941672   26685 kubeadm.go:322] 	- The kubelet is not running
	I0226 03:40:06.941760   26685 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0226 03:40:06.941767   26685 kubeadm.go:322] 
	I0226 03:40:06.941857   26685 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0226 03:40:06.941890   26685 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0226 03:40:06.941914   26685 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0226 03:40:06.941920   26685 kubeadm.go:322] 
	I0226 03:40:06.942000   26685 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0226 03:40:06.942086   26685 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0226 03:40:06.942155   26685 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0226 03:40:06.942194   26685 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0226 03:40:06.942255   26685 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0226 03:40:06.942296   26685 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0226 03:40:06.946133   26685 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0226 03:40:06.946208   26685 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0226 03:40:06.946370   26685 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0226 03:40:06.946513   26685 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 03:40:06.946614   26685 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0226 03:40:06.946720   26685 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0226 03:40:06.946734   26685 kubeadm.go:406] StartCluster complete in 8m7.132493816s
	I0226 03:40:06.946819   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0226 03:40:06.965123   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.965138   26685 logs.go:278] No container was found matching "kube-apiserver"
	I0226 03:40:06.965207   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0226 03:40:06.981681   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.981697   26685 logs.go:278] No container was found matching "etcd"
	I0226 03:40:06.981768   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0226 03:40:06.998836   26685 logs.go:276] 0 containers: []
	W0226 03:40:06.998852   26685 logs.go:278] No container was found matching "coredns"
	I0226 03:40:06.998924   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0226 03:40:07.016987   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.017001   26685 logs.go:278] No container was found matching "kube-scheduler"
	I0226 03:40:07.017074   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0226 03:40:07.034740   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.034755   26685 logs.go:278] No container was found matching "kube-proxy"
	I0226 03:40:07.034817   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0226 03:40:07.052062   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.052077   26685 logs.go:278] No container was found matching "kube-controller-manager"
	I0226 03:40:07.052142   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0226 03:40:07.069652   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.069667   26685 logs.go:278] No container was found matching "kindnet"
	I0226 03:40:07.069736   26685 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0226 03:40:07.087059   26685 logs.go:276] 0 containers: []
	W0226 03:40:07.087074   26685 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0226 03:40:07.087083   26685 logs.go:123] Gathering logs for dmesg ...
	I0226 03:40:07.087090   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 03:40:07.106593   26685 logs.go:123] Gathering logs for describe nodes ...
	I0226 03:40:07.106608   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0226 03:40:07.178051   26685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0226 03:40:07.178064   26685 logs.go:123] Gathering logs for Docker ...
	I0226 03:40:07.178072   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0226 03:40:07.198843   26685 logs.go:123] Gathering logs for container status ...
	I0226 03:40:07.198857   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 03:40:07.259135   26685 logs.go:123] Gathering logs for kubelet ...
	I0226 03:40:07.259149   26685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 03:40:07.301212   26685 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0226 03:40:07.301236   26685 out.go:239] * 
	W0226 03:40:07.301294   26685 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:40:07.301315   26685 out.go:239] * 
	W0226 03:40:07.301923   26685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0226 03:40:07.386578   26685 out.go:177] 
	W0226 03:40:07.428870   26685 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0226 03:40:07.428936   26685 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0226 03:40:07.428968   26685 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0226 03:40:07.450628   26685 out.go:177] 
	I0226 03:40:03.910892   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	I0226 03:40:06.409895   27170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqsgs" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.086544831Z" level=info msg="Loading containers: start."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.173459100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.209710579Z" level=info msg="Loading containers: done."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217348090Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217413387Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236046872Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236201204Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:47 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417082278Z" level=info msg="Processing signal 'terminated'"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417994569Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.418234047Z" level=info msg="Daemon shutdown complete"
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: docker.service: Deactivated successfully.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Starting Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.472955974Z" level=info msg="Starting up"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.716756590Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.814690642Z" level=info msg="Loading containers: start."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.933857505Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.971477614Z" level=info msg="Loading containers: done."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979740207Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979804644Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003600672Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003787673Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:56 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-26T11:40:08Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 11:40:09 up  1:12,  0 users,  load average: 4.91, 4.87, 4.96
	Linux old-k8s-version-326000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 26 11:40:07 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:40:08 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 151.
	Feb 26 11:40:08 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:40:08 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: I0226 11:40:08.453347   19566 server.go:410] Version: v1.16.0
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: I0226 11:40:08.453555   19566 plugins.go:100] No cloud provider specified.
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: I0226 11:40:08.453566   19566 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: I0226 11:40:08.455226   19566 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: W0226 11:40:08.456034   19566 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: W0226 11:40:08.456105   19566 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:40:08 old-k8s-version-326000 kubelet[19566]: F0226 11:40:08.456161   19566 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:40:08 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:40:08 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:40:09 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 152.
	Feb 26 11:40:09 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:40:09 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: I0226 11:40:09.178255   19672 server.go:410] Version: v1.16.0
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: I0226 11:40:09.178470   19672 plugins.go:100] No cloud provider specified.
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: I0226 11:40:09.178479   19672 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: I0226 11:40:09.190654   19672 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: W0226 11:40:09.191299   19672 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: W0226 11:40:09.191361   19672 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:40:09 old-k8s-version-326000 kubelet[19672]: F0226 11:40:09.191384   19672 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:40:09 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:40:09 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
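
The kubelet journal above pins down the actual failure: the legacy v1.16 kubelet exits on startup with "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 152), so the health endpoint on localhost:10248 is never served and every [kubelet-check] probe earlier in the log is refused. On a 6.6.12-linuxkit kernel the cgroup hierarchy is typically cgroup v2 only, which a v1.16 kubelet cannot consume. A minimal check to confirm this on the node, assuming SSH access to the profile's container, would be:

    # "cgroup2fs" indicates a unified, v2-only hierarchy with no separate
    # v1 "cpu" controller mount for an old kubelet to find.
    minikube ssh -p old-k8s-version-326000 -- stat -fc %T /sys/fs/cgroup
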
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (404.60061ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-326000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (510.36s)
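
The K8S_KUBELET_NOT_RUNNING hint above suggests retrying with an explicit kubelet cgroup driver. A sketch of that retry for local reproduction (profile name and Kubernetes version are taken from the log above; --extra-config is minikube's documented way to pass component flags, and it only helps if the problem is the driver setting rather than a missing cgroup v1 hierarchy):

    minikube delete -p old-k8s-version-326000
    minikube start -p old-k8s-version-326000 \
      --kubernetes-version=v1.16.0 \
      --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd
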

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:40:42.282642   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:40:46.929388   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:41:06.827317   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:41:07.585860   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:41:12.466523   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:41:23.952391   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:42:09.977713   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:42:11.880924   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:42:30.649544   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:43:01.539187   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:43:28.617421   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:43:32.573436   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:43:34.929671   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:43:40.792919   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:43:56.309242   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:44:19.239165   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:44:23.025999   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:44:24.585365   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:44:28.502183   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:44:43.786168   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:44:59.580357   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:45:03.851233   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:45:46.932529   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:46:07.590276   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:46:22.630480   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:46:23.956914   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:46:25.459410   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:47:11.882352   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:47:46.994725   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:47:59.967806   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:48:01.534878   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:48:32.566973   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:48:40.786542   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (405.781706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-326000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
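Every EOF warning in the nine-minute wait above is the same poll failing: the helper lists pods with label selector k8s-app=kubernetes-dashboard against the apiserver endpoint published at 127.0.0.1:61948. A sketch of the equivalent manual probes (the kubectl context is named after the profile, per minikube convention; the curl target is copied from the warnings):

	# What helpers_test.go polls, expressed with kubectl:
	kubectl --context old-k8s-version-326000 -n kubernetes-dashboard \
	    get pods -l k8s-app=kubernetes-dashboard
	# Probing the raw endpoint: an immediate EOF (rather than "connection refused")
	# is consistent with Docker's port proxy accepting on 127.0.0.1:61948 while
	# nothing listens on 8443 inside the container.
	curl -k https://127.0.0.1:61948/version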
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393521,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:31:41.00994209Z",
	            "FinishedAt": "2024-02-26T11:31:38.07734926Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0cce2209868f05ead2592e14c5ef5a3aa02f6ae46d9e9259d358771c5c64dff0",
	            "SandboxKey": "/var/run/docker/netns/0cce2209868f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61949"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61950"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61946"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61947"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61948"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "899d13e21bc290a1d54cc28fa6907de86db58760bae6503215e004ce92f8f3f0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
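The inspect output above closes the loop on those EOFs: the container is Running, and its 8443/tcp (apiserver) port is published at 127.0.0.1:61948, exactly the URL the pod-list poll was failing against. A sketch for extracting that mapping directly (real docker --format syntax; container name from this report):

	# Host port backing the apiserver's 8443/tcp, from the Ports map shown above:
	docker inspect old-k8s-version-326000 \
	    --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# -> 61948. A Running container with an unreachable apiserver points at the
	#    processes inside it (the kubelet crash loop), not at Docker networking.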
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (402.323376ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
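The two status probes are deliberately different views: {{.Host}} reports the container state (Running) while {{.APIServer}} reports the control plane (Stopped), and minikube signals the mismatch with a non-zero exit code, here 2. A combined sketch using the same template mechanism (Host, Kubelet, and APIServer are fields of minikube's status output; the crash-looping kubelet from the journal should surface here too):

	# One probe for all three components; a healthy profile prints Running/Running/Running.
	out/minikube-darwin-amd64 status -p old-k8s-version-326000 \
	    --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
	echo "exit=$?"   # non-zero when any component is not Running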
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25: (1.378501226s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-624000            | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-624000                 | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:36 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:36 PST | 26 Feb 24 03:41 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | embed-certs-624000 image list                          | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	| delete  | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	| delete  | -p                                                     | disable-driver-mounts-553000 | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | disable-driver-mounts-553000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:42 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145000  | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145000       | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-145000                           | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-340000 --memory=2200 --alsologtostderr   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:49 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-340000             | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-340000                  | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-340000 --memory=2200 --alsologtostderr   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 03:49:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 03:49:09.053919   28149 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:49:09.054180   28149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:49:09.054185   28149 out.go:304] Setting ErrFile to fd 2...
	I0226 03:49:09.054189   28149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:49:09.054371   28149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:49:09.055812   28149 out.go:298] Setting JSON to false
	I0226 03:49:09.077723   28149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":13720,"bootTime":1708934429,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:49:09.077817   28149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:49:09.100429   28149 out.go:177] * [newest-cni-340000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:49:09.143096   28149 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:49:09.143135   28149 notify.go:220] Checking for updates...
	I0226 03:49:09.186003   28149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:49:09.207916   28149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:49:09.229119   28149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:49:09.271648   28149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:49:09.293191   28149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:49:09.315295   28149 config.go:182] Loaded profile config "newest-cni-340000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:49:09.315695   28149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:49:09.371030   28149 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:49:09.371181   28149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:49:09.470038   28149 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:49:09.45939756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:49:09.512601   28149 out.go:177] * Using the docker driver based on existing profile
	I0226 03:49:09.533667   28149 start.go:299] selected driver: docker
	I0226 03:49:09.533694   28149 start.go:903] validating driver "docker" against &{Name:newest-cni-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:49:09.533824   28149 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:49:09.538224   28149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:49:09.646702   28149 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:49:09.635205581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:49:09.646926   28149 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0226 03:49:09.646982   28149 cni.go:84] Creating CNI manager for ""
	I0226 03:49:09.646997   28149 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:49:09.647006   28149 start_flags.go:323] config:
	{Name:newest-cni-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:49:09.689683   28149 out.go:177] * Starting control plane node newest-cni-340000 in cluster newest-cni-340000
	I0226 03:49:09.710846   28149 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:49:09.732640   28149 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:49:09.774755   28149 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 03:49:09.774799   28149 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:49:09.774806   28149 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 03:49:09.774815   28149 cache.go:56] Caching tarball of preloaded images
	I0226 03:49:09.774934   28149 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:49:09.774944   28149 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0226 03:49:09.775034   28149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/config.json ...
	I0226 03:49:09.825086   28149 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:49:09.825114   28149 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:49:09.825134   28149 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:49:09.825178   28149 start.go:365] acquiring machines lock for newest-cni-340000: {Name:mk9762481b056719b25a9fd40adb8839220055a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:49:09.825268   28149 start.go:369] acquired machines lock for "newest-cni-340000" in 71.662µs
	I0226 03:49:09.825291   28149 start.go:96] Skipping create...Using existing machine configuration
	I0226 03:49:09.825300   28149 fix.go:54] fixHost starting: 
	I0226 03:49:09.825537   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:09.875513   28149 fix.go:102] recreateIfNeeded on newest-cni-340000: state=Stopped err=<nil>
	W0226 03:49:09.875564   28149 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 03:49:09.897313   28149 out.go:177] * Restarting existing docker container for "newest-cni-340000" ...
	
	
	==> Docker <==
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.086544831Z" level=info msg="Loading containers: start."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.173459100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.209710579Z" level=info msg="Loading containers: done."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217348090Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217413387Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236046872Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236201204Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:47 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417082278Z" level=info msg="Processing signal 'terminated'"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417994569Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.418234047Z" level=info msg="Daemon shutdown complete"
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: docker.service: Deactivated successfully.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Starting Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.472955974Z" level=info msg="Starting up"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.716756590Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.814690642Z" level=info msg="Loading containers: start."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.933857505Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.971477614Z" level=info msg="Loading containers: done."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979740207Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979804644Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003600672Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003787673Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:56 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-26T11:49:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 11:49:12 up  1:21,  0 users,  load average: 5.44, 5.27, 5.05
	Linux old-k8s-version-326000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 849.
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: I0226 11:49:11.916865   31566 server.go:410] Version: v1.16.0
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: I0226 11:49:11.917173   31566 plugins.go:100] No cloud provider specified.
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: I0226 11:49:11.917184   31566 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: I0226 11:49:11.919104   31566 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: W0226 11:49:11.919979   31566 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: W0226 11:49:11.920055   31566 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:49:11 old-k8s-version-326000 kubelet[31566]: F0226 11:49:11.920085   31566 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:49:11 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:49:12 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 850.
	Feb 26 11:49:12 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:49:12 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: I0226 11:49:12.668816   31689 server.go:410] Version: v1.16.0
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: I0226 11:49:12.669096   31689 plugins.go:100] No cloud provider specified.
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: I0226 11:49:12.669106   31689 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: I0226 11:49:12.670740   31689 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: W0226 11:49:12.671561   31689 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: W0226 11:49:12.671632   31689 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:49:12 old-k8s-version-326000 kubelet[31689]: F0226 11:49:12.671662   31689 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:49:12 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:49:12 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
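The kubelet log above shows v1.16.0 crash-looping on "failed to run Kubelet: mountpoint for cpu not found": the 6.6.12-linuxkit kernel exposes only a unified cgroup v2 hierarchy, while a kubelet this old expects per-controller cgroup v1 mounts, so it exits before it can create /var/run/dockershim.sock (which is also why the container-status check above cannot dial that socket). A minimal way to confirm the cgroup layout from the host, reusing the node container name from these logs, would be:

	# "cgroup2fs" means a unified (v2-only) hierarchy, which kubelet v1.16 cannot drive
	docker exec old-k8s-version-326000 stat -fc %T /sys/fs/cgroup
	# a cgroup v1 host would additionally show per-controller mounts such as /sys/fs/cgroup/cpu here
	docker exec old-k8s-version-326000 grep cgroup /proc/mounts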
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (399.699207ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-326000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (387.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0226 03:49:19.232605   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:49:55.613678   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:49:59.574573   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:50:46.927636   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:51:07.582517   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:51:23.949937   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:51:25.454106   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:52:11.877036   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:52:25.764728   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:25.770864   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:25.783014   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:25.805224   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:25.845707   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:25.925994   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:26.088243   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:26.408602   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:27.078569   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:28.360811   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
E0226 03:52:30.922872   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:52:36.044591   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:52:46.286958   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:52:59.969894   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:53:01.536429   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:53:06.767312   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:53:28.613809   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:53:32.570284   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:53:40.791029   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:53:47.728886   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:54:19.236723   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:54:43.781787   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:54:51.668014   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:54:59.577396   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0226 03:55:09.649966   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/default-k8s-diff-port-145000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61948/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (403.05206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-326000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.84µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
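Each WARNING above records one failed poll of the dashboard pod list: the apiserver behind 127.0.0.1:61948 is stopped, so every request ends in EOF until the 9m0s context deadline expires. The manual equivalent of that poll, using the same kubeconfig context and label selector the harness logs, would be:

	kubectl --context old-k8s-version-326000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard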
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-326000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-326000:

-- stdout --
	[
	    {
	        "Id": "76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b",
	        "Created": "2024-02-26T11:25:41.957182514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393521,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:31:41.00994209Z",
	            "FinishedAt": "2024-02-26T11:31:38.07734926Z"
	        },
	        "Image": "sha256:78bbddd92c7656e5a7bf9651f59e6b47aa97241efd2e93247c457ec76b2185c5",
	        "ResolvConfPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hostname",
	        "HostsPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/hosts",
	        "LogPath": "/var/lib/docker/containers/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b/76ad634e3f3ff5bf0b2a50c602c87984102ce0977c187500835978cf6a8c221b-json.log",
	        "Name": "/old-k8s-version-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-326000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953-init/diff:/var/lib/docker/overlay2/8bb839173c154892efba77c6399a35a6f861ea09086927d7a3ace9b08c2c0425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f3f0a1425652d22ad9176e8be35f0bcc96402895ea787bafbfdf32dd7af953/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-326000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0cce2209868f05ead2592e14c5ef5a3aa02f6ae46d9e9259d358771c5c64dff0",
	            "SandboxKey": "/var/run/docker/netns/0cce2209868f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61949"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61950"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61946"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61947"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61948"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76ad634e3f3f",
	                        "old-k8s-version-326000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d3b7a706276d3d554def8a4b60d3b2d38626f3b90daf1316a28cf50ae9bb155f",
	                    "EndpointID": "899d13e21bc290a1d54cc28fa6907de86db58760bae6503215e004ce92f8f3f0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-326000",
	                        "76ad634e3f3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
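The NetworkSettings.Ports block above is where the 127.0.0.1:61948 endpoint polled earlier comes from: the container's 8443/tcp apiserver port is published on that host port. Assuming the same container name, the mapping can be read back directly with a Go-template query:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-326000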
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (411.012203ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-326000 logs -n 25: (1.389697292s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	| delete  | -p embed-certs-624000                                  | embed-certs-624000           | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	| delete  | -p                                                     | disable-driver-mounts-553000 | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:41 PST |
	|         | disable-driver-mounts-553000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:41 PST | 26 Feb 24 03:42 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-145000  | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-145000       | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:42 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:42 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-145000                           | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-145000 | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:48 PST |
	|         | default-k8s-diff-port-145000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-340000 --memory=2200 --alsologtostderr   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:48 PST | 26 Feb 24 03:49 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-340000             | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-340000                  | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-340000 --memory=2200 --alsologtostderr   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-340000 image list                           | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	| delete  | -p newest-cni-340000                                   | newest-cni-340000            | jenkins | v1.32.0 | 26 Feb 24 03:49 PST | 26 Feb 24 03:49 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 03:49:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 03:49:09.053919   28149 out.go:291] Setting OutFile to fd 1 ...
	I0226 03:49:09.054180   28149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:49:09.054185   28149 out.go:304] Setting ErrFile to fd 2...
	I0226 03:49:09.054189   28149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 03:49:09.054371   28149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 03:49:09.055812   28149 out.go:298] Setting JSON to false
	I0226 03:49:09.077723   28149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":13720,"bootTime":1708934429,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 03:49:09.077817   28149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 03:49:09.100429   28149 out.go:177] * [newest-cni-340000] minikube v1.32.0 on Darwin 14.3.1
	I0226 03:49:09.143096   28149 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 03:49:09.143135   28149 notify.go:220] Checking for updates...
	I0226 03:49:09.186003   28149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:49:09.207916   28149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 03:49:09.229119   28149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 03:49:09.271648   28149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 03:49:09.293191   28149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 03:49:09.315295   28149 config.go:182] Loaded profile config "newest-cni-340000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:49:09.315695   28149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 03:49:09.371030   28149 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 03:49:09.371181   28149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:49:09.470038   28149 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:49:09.45939756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:49:09.512601   28149 out.go:177] * Using the docker driver based on existing profile
	I0226 03:49:09.533667   28149 start.go:299] selected driver: docker
	I0226 03:49:09.533694   28149 start.go:903] validating driver "docker" against &{Name:newest-cni-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:49:09.533824   28149 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 03:49:09.538224   28149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 03:49:09.646702   28149 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-26 11:49:09.635205581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 03:49:09.646926   28149 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0226 03:49:09.646982   28149 cni.go:84] Creating CNI manager for ""
	I0226 03:49:09.646997   28149 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:49:09.647006   28149 start_flags.go:323] config:
	{Name:newest-cni-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:49:09.689683   28149 out.go:177] * Starting control plane node newest-cni-340000 in cluster newest-cni-340000
	I0226 03:49:09.710846   28149 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 03:49:09.732640   28149 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 03:49:09.774755   28149 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 03:49:09.774799   28149 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 03:49:09.774806   28149 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 03:49:09.774815   28149 cache.go:56] Caching tarball of preloaded images
	I0226 03:49:09.774934   28149 preload.go:174] Found /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0226 03:49:09.774944   28149 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0226 03:49:09.775034   28149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/config.json ...
	I0226 03:49:09.825086   28149 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 03:49:09.825114   28149 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 03:49:09.825134   28149 cache.go:194] Successfully downloaded all kic artifacts
	I0226 03:49:09.825178   28149 start.go:365] acquiring machines lock for newest-cni-340000: {Name:mk9762481b056719b25a9fd40adb8839220055a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 03:49:09.825268   28149 start.go:369] acquired machines lock for "newest-cni-340000" in 71.662µs
	I0226 03:49:09.825291   28149 start.go:96] Skipping create...Using existing machine configuration
	I0226 03:49:09.825300   28149 fix.go:54] fixHost starting: 
	I0226 03:49:09.825537   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:09.875513   28149 fix.go:102] recreateIfNeeded on newest-cni-340000: state=Stopped err=<nil>
	W0226 03:49:09.875564   28149 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 03:49:09.897313   28149 out.go:177] * Restarting existing docker container for "newest-cni-340000" ...
	I0226 03:49:09.941297   28149 cli_runner.go:164] Run: docker start newest-cni-340000
	I0226 03:49:10.180102   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:10.232335   28149 kic.go:430] container "newest-cni-340000" state is running.
	I0226 03:49:10.232934   28149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340000
	I0226 03:49:10.286549   28149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/config.json ...
	I0226 03:49:10.286981   28149 machine.go:88] provisioning docker machine ...
	I0226 03:49:10.287012   28149 ubuntu.go:169] provisioning hostname "newest-cni-340000"
	I0226 03:49:10.287078   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:10.346326   28149 main.go:141] libmachine: Using SSH client type: native
	I0226 03:49:10.346751   28149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x947e920] 0x9481680 <nil>  [] 0s} 127.0.0.1 62933 <nil> <nil>}
	I0226 03:49:10.346781   28149 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-340000 && echo "newest-cni-340000" | sudo tee /etc/hostname
	I0226 03:49:10.348415   28149 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0226 03:49:13.511580   28149 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-340000
	
	I0226 03:49:13.511688   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:13.561707   28149 main.go:141] libmachine: Using SSH client type: native
	I0226 03:49:13.561875   28149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x947e920] 0x9481680 <nil>  [] 0s} 127.0.0.1 62933 <nil> <nil>}
	I0226 03:49:13.561887   28149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 03:49:13.700917   28149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:49:13.700946   28149 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
	I0226 03:49:13.700965   28149 ubuntu.go:177] setting up certificates
	I0226 03:49:13.700983   28149 provision.go:83] configureAuth start
	I0226 03:49:13.701067   28149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340000
	I0226 03:49:13.751363   28149 provision.go:138] copyHostCerts
	I0226 03:49:13.751466   28149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
	I0226 03:49:13.751477   28149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
	I0226 03:49:13.751616   28149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
	I0226 03:49:13.751860   28149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
	I0226 03:49:13.751866   28149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
	I0226 03:49:13.751952   28149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
	I0226 03:49:13.752137   28149 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
	I0226 03:49:13.752143   28149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
	I0226 03:49:13.752221   28149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
	I0226 03:49:13.752369   28149 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.newest-cni-340000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-340000]
	I0226 03:49:13.826837   28149 provision.go:172] copyRemoteCerts
	I0226 03:49:13.826897   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 03:49:13.826951   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:13.877856   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:13.983734   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 03:49:14.023499   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0226 03:49:14.063173   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 03:49:14.118412   28149 provision.go:86] duration metric: configureAuth took 417.410539ms
	I0226 03:49:14.118427   28149 ubuntu.go:193] setting minikube options for container-runtime
	I0226 03:49:14.118578   28149 config.go:182] Loaded profile config "newest-cni-340000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:49:14.118654   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:14.170174   28149 main.go:141] libmachine: Using SSH client type: native
	I0226 03:49:14.170354   28149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x947e920] 0x9481680 <nil>  [] 0s} 127.0.0.1 62933 <nil> <nil>}
	I0226 03:49:14.170363   28149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0226 03:49:14.307059   28149 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0226 03:49:14.307072   28149 ubuntu.go:71] root file system type: overlay
	I0226 03:49:14.307172   28149 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0226 03:49:14.307265   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:14.357874   28149 main.go:141] libmachine: Using SSH client type: native
	I0226 03:49:14.358051   28149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x947e920] 0x9481680 <nil>  [] 0s} 127.0.0.1 62933 <nil> <nil>}
	I0226 03:49:14.358101   28149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0226 03:49:14.520735   28149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0226 03:49:14.520837   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:14.571394   28149 main.go:141] libmachine: Using SSH client type: native
	I0226 03:49:14.571565   28149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x947e920] 0x9481680 <nil>  [] 0s} 127.0.0.1 62933 <nil> <nil>}
	I0226 03:49:14.571580   28149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0226 03:49:14.718957   28149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 03:49:14.718976   28149 machine.go:91] provisioned docker machine in 4.431950192s
	I0226 03:49:14.718986   28149 start.go:300] post-start starting for "newest-cni-340000" (driver="docker")
	I0226 03:49:14.718997   28149 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 03:49:14.719087   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 03:49:14.719149   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:14.769332   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:14.874067   28149 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 03:49:14.878395   28149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 03:49:14.878420   28149 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 03:49:14.878427   28149 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 03:49:14.878432   28149 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 03:49:14.878442   28149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
	I0226 03:49:14.878558   28149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
	I0226 03:49:14.878744   28149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
	I0226 03:49:14.878972   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 03:49:14.894588   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:49:14.935427   28149 start.go:303] post-start completed in 216.429371ms
	I0226 03:49:14.935502   28149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 03:49:14.935563   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:14.986001   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:15.079588   28149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 03:49:15.084508   28149 fix.go:56] fixHost completed within 5.259162563s
	I0226 03:49:15.084529   28149 start.go:83] releasing machines lock for "newest-cni-340000", held for 5.259209856s
	I0226 03:49:15.084633   28149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340000
	I0226 03:49:15.135515   28149 ssh_runner.go:195] Run: cat /version.json
	I0226 03:49:15.135525   28149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 03:49:15.135588   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:15.135604   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:15.187694   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:15.187717   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:15.281054   28149 ssh_runner.go:195] Run: systemctl --version
	I0226 03:49:15.386192   28149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 03:49:15.391503   28149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0226 03:49:15.421492   28149 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0226 03:49:15.421560   28149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 03:49:15.436243   28149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0226 03:49:15.436265   28149 start.go:475] detecting cgroup driver to use...
	I0226 03:49:15.436278   28149 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:49:15.436395   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:49:15.464377   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0226 03:49:15.480174   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0226 03:49:15.496252   28149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0226 03:49:15.496323   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0226 03:49:15.512451   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:49:15.528527   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0226 03:49:15.544551   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0226 03:49:15.561755   28149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 03:49:15.577534   28149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0226 03:49:15.593807   28149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 03:49:15.610343   28149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 03:49:15.626568   28149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:49:15.685506   28149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0226 03:49:15.774117   28149 start.go:475] detecting cgroup driver to use...
	I0226 03:49:15.774137   28149 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 03:49:15.774208   28149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0226 03:49:15.791818   28149 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0226 03:49:15.791890   28149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0226 03:49:15.810640   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 03:49:15.839579   28149 ssh_runner.go:195] Run: which cri-dockerd
	I0226 03:49:15.844326   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0226 03:49:15.860126   28149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0226 03:49:15.894216   28149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0226 03:49:15.970767   28149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0226 03:49:16.062706   28149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0226 03:49:16.062812   28149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0226 03:49:16.092044   28149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:49:16.158528   28149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0226 03:49:16.452476   28149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0226 03:49:16.469874   28149 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0226 03:49:16.488985   28149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:49:16.506047   28149 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0226 03:49:16.564379   28149 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0226 03:49:16.626136   28149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:49:16.687445   28149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0226 03:49:16.720007   28149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0226 03:49:16.737034   28149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 03:49:16.799003   28149 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0226 03:49:16.887474   28149 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0226 03:49:16.887559   28149 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0226 03:49:16.892985   28149 start.go:543] Will wait 60s for crictl version
	I0226 03:49:16.893039   28149 ssh_runner.go:195] Run: which crictl
	I0226 03:49:16.896859   28149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 03:49:16.946532   28149 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0226 03:49:16.946613   28149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:49:16.968176   28149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0226 03:49:17.014356   28149 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0226 03:49:17.014507   28149 cli_runner.go:164] Run: docker exec -t newest-cni-340000 dig +short host.docker.internal
	I0226 03:49:17.119046   28149 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0226 03:49:17.119156   28149 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0226 03:49:17.124042   28149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:49:17.142308   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:17.213486   28149 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0226 03:49:17.237464   28149 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 03:49:17.237649   28149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:49:17.257863   28149 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 03:49:17.257885   28149 docker.go:615] Images already preloaded, skipping extraction
	I0226 03:49:17.257956   28149 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0226 03:49:17.275709   28149 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0226 03:49:17.275726   28149 cache_images.go:84] Images are preloaded, skipping loading
	I0226 03:49:17.275823   28149 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0226 03:49:17.320906   28149 cni.go:84] Creating CNI manager for ""
	I0226 03:49:17.320925   28149 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:49:17.320944   28149 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0226 03:49:17.320960   28149 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-340000 NodeName:newest-cni-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 03:49:17.321086   28149 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-340000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
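Note: in the generated KubeletConfiguration above, the evictionHard thresholds are all "0%" and imageGCHighThresholdPercent is 100, which effectively disables disk-pressure eviction (reasonable for a disposable test node, as the embedded comment says). A hedged sketch of sanity-checking the rendered file before kubeadm consumes it; the `kubeadm config validate` subcommand exists in recent kubeadm releases, but its availability for this exact binary is an assumption:

    # Validate the rendered config on the node (subcommand availability
    # depends on the kubeadm release; treat this as an assumption).
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml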
	
	I0226 03:49:17.321169   28149 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 03:49:17.321230   28149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0226 03:49:17.336225   28149 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 03:49:17.336296   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 03:49:17.350811   28149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0226 03:49:17.379357   28149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0226 03:49:17.407887   28149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0226 03:49:17.436181   28149 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 03:49:17.440285   28149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 03:49:17.457147   28149 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000 for IP: 192.168.67.2
	I0226 03:49:17.457168   28149 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:49:17.457345   28149 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
	I0226 03:49:17.457418   28149 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
	I0226 03:49:17.457506   28149 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/client.key
	I0226 03:49:17.457582   28149 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/apiserver.key.c7fa3a9e
	I0226 03:49:17.457649   28149 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/proxy-client.key
	I0226 03:49:17.457871   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
	W0226 03:49:17.457914   28149 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
	I0226 03:49:17.457923   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 03:49:17.457954   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
	I0226 03:49:17.457987   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
	I0226 03:49:17.458016   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
	I0226 03:49:17.458083   28149 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
	I0226 03:49:17.458687   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 03:49:17.498786   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 03:49:17.539541   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 03:49:17.579275   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/newest-cni-340000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 03:49:17.619396   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 03:49:17.661345   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0226 03:49:17.701278   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 03:49:17.741405   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0226 03:49:17.781148   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
	I0226 03:49:17.820665   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
	I0226 03:49:17.861908   28149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 03:49:17.906503   28149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 03:49:17.936938   28149 ssh_runner.go:195] Run: openssl version
	I0226 03:49:17.943086   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
	I0226 03:49:17.959319   28149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
	I0226 03:49:17.964693   28149 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
	I0226 03:49:17.964750   28149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
	I0226 03:49:17.971430   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 03:49:17.986536   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 03:49:18.002230   28149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:49:18.006506   28149 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:49:18.006553   28149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 03:49:18.013006   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 03:49:18.027526   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
	I0226 03:49:18.043269   28149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
	I0226 03:49:18.047597   28149 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
	I0226 03:49:18.047644   28149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
	I0226 03:49:18.054467   28149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
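Note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's standard CA directory layout: a certificate is located by a symlink named after its subject hash with a `.0` suffix. Condensed to one case (the b5213941 hash matches the minikubeCA line in this log):

    # Install a CA cert so OpenSSL can find it by subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"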
	I0226 03:49:18.069457   28149 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 03:49:18.073746   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 03:49:18.080234   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 03:49:18.086568   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 03:49:18.093037   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 03:49:18.099327   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 03:49:18.105560   28149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
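Note: each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); a nonzero exit would mark the cert for regeneration. The same checks as one loop (cert names taken from the log):

    # Flag any control-plane cert that expires within 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}"
    done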
	I0226 03:49:18.111724   28149 kubeadm.go:404] StartCluster: {Name:newest-cni-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 03:49:18.111848   28149 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:49:18.128624   28149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 03:49:18.143378   28149 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 03:49:18.143394   28149 kubeadm.go:636] restartCluster start
	I0226 03:49:18.143454   28149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 03:49:18.158004   28149 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:18.158086   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:18.209000   28149 kubeconfig.go:135] verify returned: extract IP: "newest-cni-340000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:49:18.209158   28149 kubeconfig.go:146] "newest-cni-340000" context is missing from /Users/jenkins/minikube-integration/18222-9538/kubeconfig - will repair!
	I0226 03:49:18.209479   28149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:49:18.210866   28149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 03:49:18.226305   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:18.226358   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:18.242957   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:18.728596   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:18.728720   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:18.747221   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:19.228447   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:19.228611   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:19.247110   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:19.728401   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:19.728559   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:19.747995   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:20.227088   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:20.227224   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:20.245218   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:20.727234   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:20.727343   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:20.745364   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:21.227501   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:21.227656   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:21.246271   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:21.726464   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:21.726626   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:21.744386   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:22.227588   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:22.227715   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:22.246770   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:22.727573   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:22.727729   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:22.746329   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:23.227119   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:23.227201   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:23.243540   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:23.726520   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:23.726649   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:23.745318   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:24.226509   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:24.226616   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:24.244262   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:24.726516   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:24.726631   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:24.743639   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:25.228141   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:25.228233   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:25.245504   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:25.728199   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:25.728385   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:25.747187   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:26.228453   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:26.228625   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:26.248976   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:26.726454   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:26.726580   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:26.744502   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:27.227273   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:27.227379   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:27.244819   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:27.727330   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:27.727437   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:27.745993   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:28.228503   28149 api_server.go:166] Checking apiserver status ...
	I0226 03:49:28.228667   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 03:49:28.247968   28149 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:28.247987   28149 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
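Note: the ten-second run of identical "Checking apiserver status" entries above is a fixed-interval poll: minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms until a PID appears or the context deadline expires, at which point it falls through to "needs reconfigure". A bash equivalent of that loop, with an explicit timeout (interval and deadline are illustrative):

    # Poll for the apiserver process; give up after 10 seconds.
    deadline=$((SECONDS + 10))
    until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
      (( SECONDS >= deadline )) && { echo "apiserver never appeared"; break; }
      sleep 0.5
    done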
	I0226 03:49:28.248003   28149 kubeadm.go:1135] stopping kube-system containers ...
	I0226 03:49:28.248071   28149 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0226 03:49:28.265911   28149 docker.go:483] Stopping containers: [3d9ac42d9923 815507f55707 4c1be9de28db 5e797a68f8d0 946efd1e19b6 d7ec00229d56 480e02723526 9797d63b7b74 be4f75ff7e15 8657529c84d5 fb6319965640 7fbc3ccfe889 2c2f7223b32a 6f5373d69f3e efcc9c269ee9]
	I0226 03:49:28.265996   28149 ssh_runner.go:195] Run: docker stop 3d9ac42d9923 815507f55707 4c1be9de28db 5e797a68f8d0 946efd1e19b6 d7ec00229d56 480e02723526 9797d63b7b74 be4f75ff7e15 8657529c84d5 fb6319965640 7fbc3ccfe889 2c2f7223b32a 6f5373d69f3e efcc9c269ee9
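Note: the container IDs passed to `docker stop` come from the name-regex filter on the previous line; the `k8s_<container>_<pod>_<namespace>_...` naming used by cri-dockerd means `name=k8s_.*_(kube-system)_` matches every kube-system container. The same teardown, condensed:

    # Stop every kube-system container in one pass (cri-dockerd container
    # naming convention assumed).
    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    [ -n "$ids" ] && docker stop $ids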
	I0226 03:49:28.284997   28149 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 03:49:28.302591   28149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 03:49:28.318070   28149 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 26 11:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 26 11:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 26 11:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 26 11:48 /etc/kubernetes/scheduler.conf
	
	I0226 03:49:28.318140   28149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 03:49:28.333008   28149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 03:49:28.349415   28149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 03:49:28.364184   28149 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:28.364246   28149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0226 03:49:28.378905   28149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 03:49:28.393635   28149 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 03:49:28.393695   28149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0226 03:49:28.408284   28149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 03:49:28.423617   28149 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 03:49:28.423633   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:49:28.475026   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:49:29.096819   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:49:29.228614   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:49:29.285546   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
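Note: rather than a full `kubeadm init`, the restart path above replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), reusing the existing cluster state. The same sequence as a loop (paths as in the log):

    # Replay the init phases against the generated config. $phase is left
    # unquoted deliberately so "certs all" splits into subcommand + argument.
    K8S_BIN=/var/lib/minikube/binaries/v1.29.0-rc.2
    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
      sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase $phase \
        --config /var/tmp/minikube/kubeadm.yaml
    done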
	I0226 03:49:29.371318   28149 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:49:29.371397   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:49:29.871538   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:49:30.372154   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:49:30.444785   28149 api_server.go:72] duration metric: took 1.073460052s to wait for apiserver process to appear ...
	I0226 03:49:30.444805   28149 api_server.go:88] waiting for apiserver healthz status ...
	I0226 03:49:30.444844   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:32.739891   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0226 03:49:32.739920   28149 api_server.go:103] status: https://127.0.0.1:62932/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0226 03:49:32.739935   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:32.839109   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:49:32.839137   28149 api_server.go:103] status: https://127.0.0.1:62932/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:49:32.945145   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:32.950822   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:49:32.950838   28149 api_server.go:103] status: https://127.0.0.1:62932/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:49:33.445304   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:33.451537   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:49:33.451553   28149 api_server.go:103] status: https://127.0.0.1:62932/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:49:33.945357   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:34.026819   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 03:49:34.026854   28149 api_server.go:103] status: https://127.0.0.1:62932/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 03:49:34.444991   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:34.450419   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 200:
	ok
	I0226 03:49:34.457961   28149 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 03:49:34.457979   28149 api_server.go:131] duration metric: took 4.013119013s to wait for apiserver health ...
	I0226 03:49:34.457986   28149 cni.go:84] Creating CNI manager for ""
	I0226 03:49:34.457997   28149 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 03:49:34.483144   28149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0226 03:49:34.502830   28149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0226 03:49:34.518443   28149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
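Note: the 457-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in this log; a minimal bridge + host-local conflist of the same general shape, using the 10.42.0.0/16 pod CIDR configured above, might look like this (field values are illustrative, not minikube's actual template):

    # Hypothetical /etc/cni/net.d/1-k8s.conflist of the same shape; the
    # real file's contents are not present in this log.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local",
                    "ranges": [[{ "subnet": "10.42.0.0/16" }]] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF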
	I0226 03:49:34.548395   28149 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 03:49:34.558356   28149 system_pods.go:59] 8 kube-system pods found
	I0226 03:49:34.558377   28149 system_pods.go:61] "coredns-76f75df574-xh4qk" [0c2276c2-4e7e-4cb4-8d5e-bfa8af452623] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 03:49:34.558383   28149 system_pods.go:61] "etcd-newest-cni-340000" [b9f7885d-4251-4b5d-ab75-a4cafc84e9b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0226 03:49:34.558389   28149 system_pods.go:61] "kube-apiserver-newest-cni-340000" [9ffbb2b6-79f6-4c57-9820-3cd9032c02a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:49:34.558396   28149 system_pods.go:61] "kube-controller-manager-newest-cni-340000" [a2045483-ed38-45a6-b707-de4d77129946] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:49:34.558402   28149 system_pods.go:61] "kube-proxy-k78hf" [358beba7-53a7-4656-b6be-93a9c14968a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0226 03:49:34.558406   28149 system_pods.go:61] "kube-scheduler-newest-cni-340000" [485139b4-5951-4925-b116-c015d446242f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0226 03:49:34.558413   28149 system_pods.go:61] "metrics-server-57f55c9bc5-kz2n9" [beab7690-d2d2-478c-9e52-f9980c52c092] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0226 03:49:34.558418   28149 system_pods.go:61] "storage-provisioner" [c5cc590a-32df-4e8e-ae3d-91c371f01f3b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0226 03:49:34.558423   28149 system_pods.go:74] duration metric: took 10.015293ms to wait for pod list to return data ...
	I0226 03:49:34.558429   28149 node_conditions.go:102] verifying NodePressure condition ...
	I0226 03:49:34.562003   28149 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0226 03:49:34.562023   28149 node_conditions.go:123] node cpu capacity is 12
	I0226 03:49:34.562034   28149 node_conditions.go:105] duration metric: took 3.600617ms to run NodePressure ...
	I0226 03:49:34.562055   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 03:49:34.821077   28149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 03:49:34.829541   28149 ops.go:34] apiserver oom_adj: -16
	I0226 03:49:34.829564   28149 kubeadm.go:640] restartCluster took 16.686016552s
	I0226 03:49:34.829573   28149 kubeadm.go:406] StartCluster complete in 16.71771645s
	I0226 03:49:34.829587   28149 settings.go:142] acquiring lock: {Name:mka913612bc349b92ac5926f4ed5df6954261df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:49:34.829674   28149 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 03:49:34.830315   28149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/kubeconfig: {Name:mk55c402e0c5e83ba737512b9e22b403be7d3c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 03:49:34.830726   28149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 03:49:34.830745   28149 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 03:49:34.830842   28149 addons.go:69] Setting default-storageclass=true in profile "newest-cni-340000"
	I0226 03:49:34.830860   28149 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-340000"
	I0226 03:49:34.830887   28149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-340000"
	I0226 03:49:34.830897   28149 addons.go:69] Setting metrics-server=true in profile "newest-cni-340000"
	I0226 03:49:34.830911   28149 addons.go:234] Setting addon metrics-server=true in "newest-cni-340000"
	W0226 03:49:34.830917   28149 addons.go:243] addon metrics-server should already be in state true
	I0226 03:49:34.830919   28149 addons.go:69] Setting dashboard=true in profile "newest-cni-340000"
	I0226 03:49:34.830960   28149 host.go:66] Checking if "newest-cni-340000" exists ...
	I0226 03:49:34.830970   28149 addons.go:234] Setting addon dashboard=true in "newest-cni-340000"
	W0226 03:49:34.831012   28149 addons.go:243] addon dashboard should already be in state true
	I0226 03:49:34.831014   28149 config.go:182] Loaded profile config "newest-cni-340000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0226 03:49:34.831069   28149 host.go:66] Checking if "newest-cni-340000" exists ...
	I0226 03:49:34.830889   28149 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-340000"
	W0226 03:49:34.831181   28149 addons.go:243] addon storage-provisioner should already be in state true
	I0226 03:49:34.831270   28149 host.go:66] Checking if "newest-cni-340000" exists ...
	I0226 03:49:34.831339   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:34.831464   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:34.832571   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:34.833164   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:34.840660   28149 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-340000" context rescaled to 1 replicas
	I0226 03:49:34.840713   28149 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0226 03:49:34.863937   28149 out.go:177] * Verifying Kubernetes components...
	I0226 03:49:34.906967   28149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 03:49:34.937975   28149 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0226 03:49:34.958775   28149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 03:49:34.979873   28149 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0226 03:49:34.917262   28149 addons.go:234] Setting addon default-storageclass=true in "newest-cni-340000"
	I0226 03:49:34.957760   28149 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 03:49:34.957815   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-340000
	W0226 03:49:35.001080   28149 addons.go:243] addon default-storageclass should already be in state true
	I0226 03:49:35.021942   28149 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0226 03:49:35.043254   28149 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0226 03:49:35.043278   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0226 03:49:35.064001   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0226 03:49:35.043298   28149 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:49:35.064018   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0226 03:49:35.064026   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 03:49:35.043326   28149 host.go:66] Checking if "newest-cni-340000" exists ...
	I0226 03:49:35.064073   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:35.064082   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:35.064090   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:35.066226   28149 cli_runner.go:164] Run: docker container inspect newest-cni-340000 --format={{.State.Status}}
	I0226 03:49:35.075884   28149 api_server.go:52] waiting for apiserver process to appear ...
	I0226 03:49:35.075990   28149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 03:49:35.097209   28149 api_server.go:72] duration metric: took 256.439576ms to wait for apiserver process to appear ...
	I0226 03:49:35.097235   28149 api_server.go:88] waiting for apiserver healthz status ...
	I0226 03:49:35.097275   28149 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62932/healthz ...
	I0226 03:49:35.104313   28149 api_server.go:279] https://127.0.0.1:62932/healthz returned 200:
	ok
	I0226 03:49:35.106191   28149 api_server.go:141] control plane version: v1.29.0-rc.2
	I0226 03:49:35.106208   28149 api_server.go:131] duration metric: took 8.966754ms to wait for apiserver health ...
	I0226 03:49:35.106214   28149 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 03:49:35.112768   28149 system_pods.go:59] 8 kube-system pods found
	I0226 03:49:35.112788   28149 system_pods.go:61] "coredns-76f75df574-xh4qk" [0c2276c2-4e7e-4cb4-8d5e-bfa8af452623] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0226 03:49:35.112794   28149 system_pods.go:61] "etcd-newest-cni-340000" [b9f7885d-4251-4b5d-ab75-a4cafc84e9b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0226 03:49:35.112804   28149 system_pods.go:61] "kube-apiserver-newest-cni-340000" [9ffbb2b6-79f6-4c57-9820-3cd9032c02a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 03:49:35.112808   28149 system_pods.go:61] "kube-controller-manager-newest-cni-340000" [a2045483-ed38-45a6-b707-de4d77129946] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0226 03:49:35.112815   28149 system_pods.go:61] "kube-proxy-k78hf" [358beba7-53a7-4656-b6be-93a9c14968a0] Running
	I0226 03:49:35.112819   28149 system_pods.go:61] "kube-scheduler-newest-cni-340000" [485139b4-5951-4925-b116-c015d446242f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0226 03:49:35.112824   28149 system_pods.go:61] "metrics-server-57f55c9bc5-kz2n9" [beab7690-d2d2-478c-9e52-f9980c52c092] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0226 03:49:35.112829   28149 system_pods.go:61] "storage-provisioner" [c5cc590a-32df-4e8e-ae3d-91c371f01f3b] Running
	I0226 03:49:35.112841   28149 system_pods.go:74] duration metric: took 6.622954ms to wait for pod list to return data ...
	I0226 03:49:35.112847   28149 default_sa.go:34] waiting for default service account to be created ...
	I0226 03:49:35.117531   28149 default_sa.go:45] found service account: "default"
	I0226 03:49:35.117553   28149 default_sa.go:55] duration metric: took 4.696116ms for default service account to be created ...
	I0226 03:49:35.117562   28149 kubeadm.go:581] duration metric: took 276.801663ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0226 03:49:35.117574   28149 node_conditions.go:102] verifying NodePressure condition ...
	I0226 03:49:35.122081   28149 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0226 03:49:35.122098   28149 node_conditions.go:123] node cpu capacity is 12
	I0226 03:49:35.122108   28149 node_conditions.go:105] duration metric: took 4.529671ms to run NodePressure ...
	I0226 03:49:35.122122   28149 start.go:228] waiting for startup goroutines ...
	I0226 03:49:35.128217   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:35.128217   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:35.129284   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:35.132131   28149 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 03:49:35.132144   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 03:49:35.132217   28149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340000
	I0226 03:49:35.184853   28149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62933 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/newest-cni-340000/id_rsa Username:docker}
	I0226 03:49:35.244835   28149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 03:49:35.247652   28149 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0226 03:49:35.247663   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0226 03:49:35.250022   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0226 03:49:35.250035   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0226 03:49:35.279316   28149 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0226 03:49:35.279334   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0226 03:49:35.280755   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0226 03:49:35.280771   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0226 03:49:35.303017   28149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 03:49:35.330981   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0226 03:49:35.331006   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0226 03:49:35.331015   28149 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0226 03:49:35.331040   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0226 03:49:35.365902   28149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0226 03:49:35.365915   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0226 03:49:35.365925   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0226 03:49:35.461291   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0226 03:49:35.461305   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0226 03:49:35.633125   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0226 03:49:35.633162   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0226 03:49:35.731147   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0226 03:49:35.731169   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0226 03:49:35.826445   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0226 03:49:35.826471   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0226 03:49:35.862786   28149 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0226 03:49:35.862801   28149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0226 03:49:35.957157   28149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0226 03:49:36.346129   28149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101255866s)
	I0226 03:49:36.346155   28149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043109777s)
	I0226 03:49:36.530383   28149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.164437488s)
	I0226 03:49:36.530406   28149 addons.go:470] Verifying addon metrics-server=true in "newest-cni-340000"
	I0226 03:49:36.782941   28149 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-340000 addons enable metrics-server
	
	I0226 03:49:36.804026   28149 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0226 03:49:36.826983   28149 addons.go:505] enable addons completed in 1.996225294s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0226 03:49:36.827042   28149 start.go:233] waiting for cluster config update ...
	I0226 03:49:36.827065   28149 start.go:242] writing updated cluster config ...
	I0226 03:49:36.848997   28149 ssh_runner.go:195] Run: rm -f paused
	I0226 03:49:36.891388   28149 start.go:601] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0226 03:49:36.912993   28149 out.go:177] * Done! kubectl is now configured to use "newest-cni-340000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.086544831Z" level=info msg="Loading containers: start."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.173459100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.209710579Z" level=info msg="Loading containers: done."
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217348090Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.217413387Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236046872Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:47 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:47.236201204Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:47 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417082278Z" level=info msg="Processing signal 'terminated'"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.417994569Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[742]: time="2024-02-26T11:31:55.418234047Z" level=info msg="Daemon shutdown complete"
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: docker.service: Deactivated successfully.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 26 11:31:55 old-k8s-version-326000 systemd[1]: Starting Docker Application Container Engine...
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.472955974Z" level=info msg="Starting up"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.716756590Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.814690642Z" level=info msg="Loading containers: start."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.933857505Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.971477614Z" level=info msg="Loading containers: done."
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979740207Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 26 11:31:55 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:55.979804644Z" level=info msg="Daemon has completed initialization"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003600672Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 26 11:31:56 old-k8s-version-326000 dockerd[980]: time="2024-02-26T11:31:56.003787673Z" level=info msg="API listen on [::]:2376"
	Feb 26 11:31:56 old-k8s-version-326000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-26T11:55:40Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 11:55:40 up  1:28,  0 users,  load average: 3.46, 3.71, 4.35
	Linux old-k8s-version-326000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 26 11:55:38 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:55:39 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1352.
	Feb 26 11:55:39 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:55:39 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: I0226 11:55:39.414865   40160 server.go:410] Version: v1.16.0
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: I0226 11:55:39.415112   40160 plugins.go:100] No cloud provider specified.
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: I0226 11:55:39.415122   40160 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: I0226 11:55:39.416766   40160 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: W0226 11:55:39.417333   40160 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: W0226 11:55:39.417393   40160 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:55:39 old-k8s-version-326000 kubelet[40160]: F0226 11:55:39.417416   40160 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:55:39 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:55:39 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 26 11:55:40 old-k8s-version-326000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1353.
	Feb 26 11:55:40 old-k8s-version-326000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 26 11:55:40 old-k8s-version-326000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: I0226 11:55:40.172923   40247 server.go:410] Version: v1.16.0
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: I0226 11:55:40.173205   40247 plugins.go:100] No cloud provider specified.
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: I0226 11:55:40.173217   40247 server.go:773] Client rotation is on, will bootstrap in background
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: I0226 11:55:40.174928   40247 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: W0226 11:55:40.175500   40247 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: W0226 11:55:40.175557   40247 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 26 11:55:40 old-k8s-version-326000 kubelet[40247]: F0226 11:55:40.175579   40247 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 26 11:55:40 old-k8s-version-326000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 26 11:55:40 old-k8s-version-326000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 2 (406.490877ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-326000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (387.99s)
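A hedged reading of the failure above: the v1.16.0 kubelet on old-k8s-version-326000 never stays up (systemd shows the restart counter at 1353, each attempt dying with "failed to run Kubelet: mountpoint for cpu not found"), and kubelet releases that old predate cgroup v2 support, so a host exposing only the unified cgroup v2 hierarchy leaves no v1 "cpu" controller mount for it to find. That would also explain the container-status probe failing on /var/run/dockershim.sock: in v1.16 the dockershim socket is created by the kubelet itself, so it never appears while the kubelet crash-loops. A diagnostic sketch, not part of the test suite, assuming the node container is still running:

	# Check which cgroup filesystem the kic node exposes:
	docker exec old-k8s-version-326000 stat -fc %T /sys/fs/cgroup
	# "cgroup2fs" means a unified v2-only hierarchy; "tmpfs" means legacy v1
	docker exec old-k8s-version-326000 grep cgroup /proc/mounts
	# A v1 host lists per-controller mounts such as /sys/fs/cgroup/cpu; their
	# absence matches the "mountpoint for cpu not found" fatal in the kubelet log above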

                                                
                                    

Test pass (301/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 24.06
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.35
9 TestDownloadOnly/v1.16.0/DeleteAll 0.64
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.28.4/json-events 22.28
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.33
18 TestDownloadOnly/v1.28.4/DeleteAll 0.66
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.29.0-rc.2/json-events 21.05
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.37
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.66
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.38
29 TestDownloadOnlyKic 1.94
30 TestBinaryMirror 1.61
31 TestOffline 41.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
36 TestAddons/Setup 226.2
40 TestAddons/parallel/InspektorGadget 10.97
41 TestAddons/parallel/MetricsServer 5.87
42 TestAddons/parallel/HelmTiller 11.01
44 TestAddons/parallel/CSI 59.01
45 TestAddons/parallel/Headlamp 13.58
46 TestAddons/parallel/CloudSpanner 5.7
47 TestAddons/parallel/LocalPath 54.05
48 TestAddons/parallel/NvidiaDevicePlugin 5.69
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 11.73
54 TestCertOptions 24.68
55 TestCertExpiration 230.85
56 TestDockerFlags 26.55
57 TestForceSystemdFlag 26.86
58 TestForceSystemdEnv 25.34
61 TestHyperKitDriverInstallOrUpdate 9.27
64 TestErrorSpam/setup 22.33
65 TestErrorSpam/start 2.13
66 TestErrorSpam/status 1.29
67 TestErrorSpam/pause 1.76
68 TestErrorSpam/unpause 1.84
69 TestErrorSpam/stop 2.81
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 37.1
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 39.79
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 10.46
81 TestFunctional/serial/CacheCmd/cache/add_local 1.84
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.47
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 1.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.64
89 TestFunctional/serial/ExtraConfig 41.13
90 TestFunctional/serial/ComponentHealth 0.08
91 TestFunctional/serial/LogsCmd 3.14
92 TestFunctional/serial/LogsFileCmd 3.1
93 TestFunctional/serial/InvalidService 4.36
95 TestFunctional/parallel/ConfigCmd 0.53
96 TestFunctional/parallel/DashboardCmd 11.07
97 TestFunctional/parallel/DryRun 1.45
98 TestFunctional/parallel/InternationalLanguage 0.68
99 TestFunctional/parallel/StatusCmd 1.27
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 27.51
107 TestFunctional/parallel/SSHCmd 0.86
108 TestFunctional/parallel/CpCmd 2.63
109 TestFunctional/parallel/MySQL 28.66
110 TestFunctional/parallel/FileSync 0.41
111 TestFunctional/parallel/CertSync 2.56
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 1.42
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.13
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
133 TestFunctional/parallel/ProfileCmd/profile_list 0.59
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
135 TestFunctional/parallel/MountCmd/any-port 12.52
136 TestFunctional/parallel/ServiceCmd/List 0.63
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
138 TestFunctional/parallel/ServiceCmd/HTTPS 15
139 TestFunctional/parallel/MountCmd/specific-port 2.4
140 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
141 TestFunctional/parallel/ServiceCmd/Format 15
142 TestFunctional/parallel/Version/short 0.12
143 TestFunctional/parallel/Version/components 0.98
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
148 TestFunctional/parallel/ImageCommands/ImageBuild 5.96
149 TestFunctional/parallel/ImageCommands/Setup 5.84
150 TestFunctional/parallel/ServiceCmd/URL 15
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.5
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.37
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.91
154 TestFunctional/parallel/DockerEnv/bash 1.64
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.23
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.05
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.48
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestImageBuild/serial/Setup 21.75
169 TestImageBuild/serial/NormalBuild 9.67
170 TestImageBuild/serial/BuildWithBuildArg 1.4
171 TestImageBuild/serial/BuildWithDockerIgnore 1.46
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.06
182 TestJSONOutput/start/Command 75.86
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.69
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.57
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.76
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.78
207 TestKicCustomNetwork/create_custom_network 24.36
208 TestKicCustomNetwork/use_default_bridge_network 22.88
209 TestKicExistingNetwork 23.81
210 TestKicCustomSubnet 23.77
211 TestKicStaticIP 24.4
212 TestMainNoArgs 0.09
213 TestMinikubeProfile 50.93
216 TestMountStart/serial/StartWithMountFirst 7.8
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 7.81
219 TestMountStart/serial/VerifyMountSecond 0.39
220 TestMountStart/serial/DeleteFirst 2.08
221 TestMountStart/serial/VerifyMountPostDelete 0.39
222 TestMountStart/serial/Stop 1.56
223 TestMountStart/serial/RestartStopped 9.07
224 TestMountStart/serial/VerifyMountPostStop 0.39
227 TestMultiNode/serial/FreshStart2Nodes 64.22
228 TestMultiNode/serial/DeployApp2Nodes 41.92
229 TestMultiNode/serial/PingHostFrom2Pods 0.97
230 TestMultiNode/serial/AddNode 15.24
231 TestMultiNode/serial/MultiNodeLabels 0.07
232 TestMultiNode/serial/ProfileList 0.53
233 TestMultiNode/serial/CopyFile 14.45
234 TestMultiNode/serial/StopNode 2.98
235 TestMultiNode/serial/StartAfterStop 13.61
236 TestMultiNode/serial/RestartKeepsNodes 99.25
237 TestMultiNode/serial/DeleteNode 5.88
238 TestMultiNode/serial/StopMultiNode 21.82
239 TestMultiNode/serial/RestartMultiNode 62.05
240 TestMultiNode/serial/ValidateNameConflict 24.81
244 TestPreload 182.71
246 TestScheduledStopUnix 95.45
247 TestSkaffold 130.71
249 TestInsufficientStorage 10.61
250 TestRunningBinaryUpgrade 190
253 TestMissingContainerUpgrade 107.68
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 21.83
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 24.52
267 TestStoppedBinaryUpgrade/Setup 5.61
268 TestStoppedBinaryUpgrade/Upgrade 73.21
269 TestStoppedBinaryUpgrade/MinikubeLogs 3.03
271 TestPause/serial/Start 35.9
272 TestPause/serial/SecondStartNoReconfiguration 41.23
273 TestPause/serial/Pause 0.64
274 TestPause/serial/VerifyStatus 0.42
275 TestPause/serial/Unpause 0.72
276 TestPause/serial/PauseAgain 0.7
277 TestPause/serial/DeletePaused 2.58
278 TestPause/serial/VerifyDeletedResources 0.61
287 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
288 TestNoKubernetes/serial/StartWithK8s 23.92
289 TestNoKubernetes/serial/StartWithStopK8s 17.4
290 TestNoKubernetes/serial/Start 7.03
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
292 TestNoKubernetes/serial/ProfileList 26.34
293 TestNoKubernetes/serial/Stop 1.54
294 TestNoKubernetes/serial/StartNoArgs 7.96
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
296 TestNetworkPlugins/group/auto/Start 74.92
297 TestNetworkPlugins/group/kindnet/Start 51.04
298 TestNetworkPlugins/group/auto/KubeletFlags 0.39
299 TestNetworkPlugins/group/auto/NetCatPod 15.2
300 TestNetworkPlugins/group/auto/DNS 0.16
301 TestNetworkPlugins/group/auto/Localhost 0.13
302 TestNetworkPlugins/group/auto/HairPin 0.13
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
305 TestNetworkPlugins/group/kindnet/NetCatPod 15.2
306 TestNetworkPlugins/group/flannel/Start 49.14
307 TestNetworkPlugins/group/kindnet/DNS 0.14
308 TestNetworkPlugins/group/kindnet/Localhost 0.11
309 TestNetworkPlugins/group/kindnet/HairPin 0.12
310 TestNetworkPlugins/group/enable-default-cni/Start 37.87
311 TestNetworkPlugins/group/flannel/ControllerPod 6.01
312 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
313 TestNetworkPlugins/group/flannel/NetCatPod 14.19
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.2
316 TestNetworkPlugins/group/flannel/DNS 0.17
317 TestNetworkPlugins/group/flannel/Localhost 0.12
318 TestNetworkPlugins/group/flannel/HairPin 0.13
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
322 TestNetworkPlugins/group/bridge/Start 39.25
323 TestNetworkPlugins/group/kubenet/Start 76.11
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
325 TestNetworkPlugins/group/bridge/NetCatPod 15.2
326 TestNetworkPlugins/group/bridge/DNS 0.14
327 TestNetworkPlugins/group/bridge/Localhost 0.11
328 TestNetworkPlugins/group/bridge/HairPin 0.11
329 TestNetworkPlugins/group/custom-flannel/Start 49.96
330 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
331 TestNetworkPlugins/group/kubenet/NetCatPod 14.25
332 TestNetworkPlugins/group/kubenet/DNS 0.13
333 TestNetworkPlugins/group/kubenet/Localhost 0.13
334 TestNetworkPlugins/group/kubenet/HairPin 0.12
335 TestNetworkPlugins/group/calico/Start 164.38
336 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.63
337 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.21
338 TestNetworkPlugins/group/custom-flannel/DNS 0.14
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
340 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
341 TestNetworkPlugins/group/false/Start 38.81
342 TestNetworkPlugins/group/false/KubeletFlags 0.43
343 TestNetworkPlugins/group/false/NetCatPod 14.2
344 TestNetworkPlugins/group/false/DNS 0.15
345 TestNetworkPlugins/group/false/Localhost 0.13
346 TestNetworkPlugins/group/false/HairPin 0.12
349 TestNetworkPlugins/group/calico/ControllerPod 6.01
350 TestNetworkPlugins/group/calico/KubeletFlags 0.4
351 TestNetworkPlugins/group/calico/NetCatPod 13.2
352 TestNetworkPlugins/group/calico/DNS 0.15
353 TestNetworkPlugins/group/calico/Localhost 0.11
354 TestNetworkPlugins/group/calico/HairPin 0.12
356 TestStartStop/group/no-preload/serial/FirstStart 80.93
357 TestStartStop/group/no-preload/serial/DeployApp 14.25
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
359 TestStartStop/group/no-preload/serial/Stop 11
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.44
361 TestStartStop/group/no-preload/serial/SecondStart 313.14
364 TestStartStop/group/old-k8s-version/serial/Stop 1.56
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.45
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
370 TestStartStop/group/no-preload/serial/Pause 3.07
372 TestStartStop/group/embed-certs/serial/FirstStart 75.12
373 TestStartStop/group/embed-certs/serial/DeployApp 14.24
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
375 TestStartStop/group/embed-certs/serial/Stop 10.97
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.45
377 TestStartStop/group/embed-certs/serial/SecondStart 312.3
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
382 TestStartStop/group/embed-certs/serial/Pause 3.19
384 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.17
385 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.27
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
387 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.47
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 312.42
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
395 TestStartStop/group/newest-cni/serial/FirstStart 34.99
396 TestStartStop/group/newest-cni/serial/DeployApp 0
397 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
398 TestStartStop/group/newest-cni/serial/Stop 5.93
399 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.45
400 TestStartStop/group/newest-cni/serial/SecondStart 28.43
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
405 TestStartStop/group/newest-cni/serial/Pause 3.15
TestDownloadOnly/v1.16.0/json-events (24.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-597000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-597000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (24.055305645s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.06s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-597000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-597000: exit status 85 (345.190967ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-597000 | jenkins | v1.32.0 | 26 Feb 24 02:27 PST |          |
	|         | -p download-only-597000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 02:27:57
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 02:27:57.278800   10028 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:27:57.278981   10028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:27:57.278986   10028 out.go:304] Setting ErrFile to fd 2...
	I0226 02:27:57.278990   10028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:27:57.279164   10028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	W0226 02:27:57.279294   10028 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18222-9538/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18222-9538/.minikube/config/config.json: no such file or directory
	I0226 02:27:57.280968   10028 out.go:298] Setting JSON to true
	I0226 02:27:57.303621   10028 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8848,"bootTime":1708934429,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:27:57.303712   10028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:27:57.324921   10028 out.go:97] [download-only-597000] minikube v1.32.0 on Darwin 14.3.1
	I0226 02:27:57.344722   10028 out.go:169] MINIKUBE_LOCATION=18222
	I0226 02:27:57.325146   10028 notify.go:220] Checking for updates...
	W0226 02:27:57.325195   10028 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball: no such file or directory
	I0226 02:27:57.389995   10028 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:27:57.410855   10028 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:27:57.452989   10028 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:27:57.473942   10028 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	W0226 02:27:57.518076   10028 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 02:27:57.518620   10028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:27:57.576915   10028 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:27:57.577059   10028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:27:57.681858   10028 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:105 SystemTime:2024-02-26 10:27:57.665995268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:27:57.703538   10028 out.go:97] Using the docker driver based on user configuration
	I0226 02:27:57.703631   10028 start.go:299] selected driver: docker
	I0226 02:27:57.703642   10028 start.go:903] validating driver "docker" against <nil>
	I0226 02:27:57.703846   10028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:27:57.807974   10028 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:105 SystemTime:2024-02-26 10:27:57.795352808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:27:57.808154   10028 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 02:27:57.811200   10028 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0226 02:27:57.811358   10028 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 02:27:57.833121   10028 out.go:169] Using Docker Desktop driver with root privileges
	I0226 02:27:57.854877   10028 cni.go:84] Creating CNI manager for ""
	I0226 02:27:57.854912   10028 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0226 02:27:57.854926   10028 start_flags.go:323] config:
	{Name:download-only-597000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-597000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:27:57.875721   10028 out.go:97] Starting control plane node download-only-597000 in cluster download-only-597000
	I0226 02:27:57.875788   10028 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 02:27:57.896755   10028 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 02:27:57.896806   10028 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 02:27:57.896845   10028 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 02:27:57.946092   10028 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 02:27:57.946329   10028 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 02:27:57.946462   10028 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 02:27:58.164253   10028 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 02:27:58.164298   10028 cache.go:56] Caching tarball of preloaded images
	I0226 02:27:58.164623   10028 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0226 02:27:58.186379   10028 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0226 02:27:58.186407   10028 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:27:58.797101   10028 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0226 02:28:17.072784   10028 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:28:17.072938   10028 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-597000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.35s)
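The exit status 85 here is the expected outcome rather than a regression: a --download-only run only caches the kicbase image and the preload tarball, so no control plane node exists and minikube logs has nothing to read (hence the `The control plane node "" does not exist` hint above). A minimal reproduction sketch with a hypothetical profile name, mirroring the flags the test uses:

	# download-only-demo is a made-up profile name for illustration
	minikube start -p download-only-demo -o=json --download-only --force \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
	minikube logs -p download-only-demo; echo $?
	# in this run the equivalent logs command exited 85: no node to collect logs from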

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.64s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-597000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (22.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-673000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-673000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (22.283908804s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (22.28s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-673000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-673000: exit status 85 (326.693765ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-597000 | jenkins | v1.32.0 | 26 Feb 24 02:27 PST |                     |
	|         | -p download-only-597000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| delete  | -p download-only-597000        | download-only-597000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| start   | -o=json --download-only        | download-only-673000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST |                     |
	|         | -p download-only-673000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 02:28:22
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 02:28:22.701458   10102 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:28:22.701639   10102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:28:22.701644   10102 out.go:304] Setting ErrFile to fd 2...
	I0226 02:28:22.701648   10102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:28:22.701835   10102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:28:22.703325   10102 out.go:298] Setting JSON to true
	I0226 02:28:22.726332   10102 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8873,"bootTime":1708934429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:28:22.726413   10102 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:28:22.747609   10102 out.go:97] [download-only-673000] minikube v1.32.0 on Darwin 14.3.1
	I0226 02:28:22.769444   10102 out.go:169] MINIKUBE_LOCATION=18222
	I0226 02:28:22.747785   10102 notify.go:220] Checking for updates...
	I0226 02:28:22.812959   10102 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:28:22.834521   10102 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:28:22.876065   10102 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:28:22.897395   10102 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	W0226 02:28:22.941000   10102 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 02:28:22.941377   10102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:28:22.999347   10102 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:28:22.999505   10102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:28:23.099383   10102 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-26 10:28:23.089327922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:28:23.120950   10102 out.go:97] Using the docker driver based on user configuration
	I0226 02:28:23.120971   10102 start.go:299] selected driver: docker
	I0226 02:28:23.120977   10102 start.go:903] validating driver "docker" against <nil>
	I0226 02:28:23.121090   10102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:28:23.220161   10102 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-26 10:28:23.20763586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:28:23.220318   10102 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 02:28:23.223195   10102 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0226 02:28:23.223339   10102 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 02:28:23.244894   10102 out.go:169] Using Docker Desktop driver with root privileges
	I0226 02:28:23.265747   10102 cni.go:84] Creating CNI manager for ""
	I0226 02:28:23.265771   10102 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 02:28:23.265786   10102 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 02:28:23.265797   10102 start_flags.go:323] config:
	{Name:download-only-673000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-673000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:28:23.286914   10102 out.go:97] Starting control plane node download-only-673000 in cluster download-only-673000
	I0226 02:28:23.286956   10102 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 02:28:23.308981   10102 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 02:28:23.309069   10102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 02:28:23.309142   10102 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 02:28:23.360659   10102 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 02:28:23.360822   10102 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 02:28:23.360839   10102 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 02:28:23.360845   10102 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 02:28:23.360865   10102 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 02:28:23.577247   10102 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0226 02:28:23.577276   10102 cache.go:56] Caching tarball of preloaded images
	I0226 02:28:23.577497   10102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0226 02:28:23.599207   10102 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0226 02:28:23.599228   10102 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:28:24.203590   10102 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-673000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.33s)
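
Note: each preload download above carries an md5 digest in its URL query ("?checksum=md5:..."), which minikube saves and then verifies against the fetched tarball. A minimal sketch of that verification step, assuming a local tarball path and using the digest from the v1.28.4 download line (illustrative, not minikube's actual preload code):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams the file through an md5 hash and compares the
	// hex digest against the expected value from the download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		err := verifyMD5("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
			"7ebdea7754e21f51b865dbfc36b53b7d")
		fmt.Println(err) // nil when the tarball matches the published digest
	}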

TestDownloadOnly/v1.28.4/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.66s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-673000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.29.0-rc.2/json-events (21.05s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (21.046319423s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (21.05s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-709000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-709000: exit status 85 (367.950386ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-597000 | jenkins | v1.32.0 | 26 Feb 24 02:27 PST |                     |
	|         | -p download-only-597000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| delete  | -p download-only-597000           | download-only-597000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| start   | -o=json --download-only           | download-only-673000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST |                     |
	|         | -p download-only-673000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| delete  | -p download-only-673000           | download-only-673000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST | 26 Feb 24 02:28 PST |
	| start   | -o=json --download-only           | download-only-709000 | jenkins | v1.32.0 | 26 Feb 24 02:28 PST |                     |
	|         | -p download-only-709000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 02:28:46
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 02:28:46.355371   10176 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:28:46.355561   10176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:28:46.355566   10176 out.go:304] Setting ErrFile to fd 2...
	I0226 02:28:46.355570   10176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:28:46.355773   10176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:28:46.357231   10176 out.go:298] Setting JSON to true
	I0226 02:28:46.379497   10176 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8897,"bootTime":1708934429,"procs":426,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:28:46.379588   10176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:28:46.401389   10176 out.go:97] [download-only-709000] minikube v1.32.0 on Darwin 14.3.1
	I0226 02:28:46.423093   10176 out.go:169] MINIKUBE_LOCATION=18222
	I0226 02:28:46.401655   10176 notify.go:220] Checking for updates...
	I0226 02:28:46.465433   10176 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:28:46.507285   10176 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:28:46.528416   10176 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:28:46.550295   10176 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	W0226 02:28:46.595323   10176 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 02:28:46.595813   10176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:28:46.654518   10176 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:28:46.654667   10176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:28:46.757526   10176 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:106 SystemTime:2024-02-26 10:28:46.745058708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:28:46.778798   10176 out.go:97] Using the docker driver based on user configuration
	I0226 02:28:46.778837   10176 start.go:299] selected driver: docker
	I0226 02:28:46.778850   10176 start.go:903] validating driver "docker" against <nil>
	I0226 02:28:46.779055   10176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:28:46.883295   10176 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:106 SystemTime:2024-02-26 10:28:46.871027855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:28:46.883487   10176 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 02:28:46.886388   10176 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0226 02:28:46.886523   10176 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 02:28:46.907583   10176 out.go:169] Using Docker Desktop driver with root privileges
	I0226 02:28:46.929649   10176 cni.go:84] Creating CNI manager for ""
	I0226 02:28:46.929684   10176 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0226 02:28:46.929700   10176 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0226 02:28:46.929744   10176 start_flags.go:323] config:
	{Name:download-only-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-709000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:28:46.951308   10176 out.go:97] Starting control plane node download-only-709000 in cluster download-only-709000
	I0226 02:28:46.951324   10176 cache.go:121] Beginning downloading kic base image for docker with docker
	I0226 02:28:46.973049   10176 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 02:28:46.973113   10176 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 02:28:46.973173   10176 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 02:28:47.024639   10176 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 02:28:47.024850   10176 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 02:28:47.024870   10176 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 02:28:47.024877   10176 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 02:28:47.024884   10176 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 02:28:47.245144   10176 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 02:28:47.245175   10176 cache.go:56] Caching tarball of preloaded images
	I0226 02:28:47.245524   10176 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 02:28:47.268506   10176 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0226 02:28:47.268528   10176 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:28:47.848637   10176 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0226 02:29:05.418539   10176 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:29:05.418736   10176 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0226 02:29:05.985431   10176 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0226 02:29:05.985666   10176 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/download-only-709000/config.json ...
	I0226 02:29:05.985691   10176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/download-only-709000/config.json: {Name:mka325bffa8d4561aaaf44ae5506aa5304aed12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 02:29:05.987216   10176 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0226 02:29:05.987484   10176 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-709000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.37s)
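
Note: the kubectl download line above uses a "checksum=file:..." reference rather than an inline digest: the expected sha256 is fetched from the published .sha256 file alongside the binary. A sketch of that pattern (helper names here are illustrative only):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// publishedSHA256 fetches a .sha256 file and returns its hex digest;
	// such files may hold just the digest or "digest  filename".
	func publishedSHA256(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		b, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		return strings.Fields(string(b))[0], nil
	}

	// sha256OfFile computes the local file's digest for comparison.
	func sha256OfFile(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		want, err := publishedSHA256("https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256")
		if err != nil {
			fmt.Println(err)
			return
		}
		got, err := sha256OfFile("kubectl")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("match:", got == want)
	}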

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.66s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-709000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-435000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-435000
--- PASS: TestDownloadOnlyKic (1.94s)

TestBinaryMirror (1.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-620000 --alsologtostderr --binary-mirror http://127.0.0.1:57216 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-620000 --alsologtostderr --binary-mirror http://127.0.0.1:57216 --driver=docker : (1.004031522s)
helpers_test.go:175: Cleaning up "binary-mirror-620000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-620000
--- PASS: TestBinaryMirror (1.61s)

TestOffline (41.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-584000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-584000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (39.081003677s)
helpers_test.go:175: Cleaning up "offline-docker-584000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-584000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-584000: (2.457367315s)
--- PASS: TestOffline (41.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-108000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-108000: exit status 85 (195.674086ms)

-- stdout --
	* Profile "addons-108000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-108000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-108000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-108000: exit status 85 (175.246803ms)

-- stdout --
	* Profile "addons-108000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-108000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

TestAddons/Setup (226.2s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-108000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-108000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m46.196833046s)
--- PASS: TestAddons/Setup (226.20s)

TestAddons/parallel/InspektorGadget (10.97s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t9lns" [bb138b00-0a24-4501-8df9-a894b8460c28] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006398156s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-108000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-108000: (5.966558415s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.868823ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-4kc7c" [523c5e14-a736-436a-a941-7d7391ff2bb6] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007088088s
addons_test.go:415: (dbg) Run:  kubectl --context addons-108000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)
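
Note: the "waiting 6m0s for pods matching ..." lines come from helpers that poll pod state by label until the pods are healthy or the timeout expires. A hypothetical sketch of such a polling loop, shelling out to kubectl for brevity (the suite's real helpers use the Kubernetes client API, not kubectl):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForRunning polls the phase of pods matching a label selector
	// until every matched pod reports Running or the deadline passes.
	func waitForRunning(context, namespace, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pods", "-n", namespace, "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				running := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						running = false
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q in %q not Running within %v", selector, namespace, timeout)
	}

	func main() {
		fmt.Println(waitForRunning("addons-108000", "kube-system", "k8s-app=metrics-server", 6*time.Minute))
	}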

TestAddons/parallel/HelmTiller (11.01s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.382278ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-wqzfc" [8bf0e2a9-1da8-4627-813a-b8fd1d77ceab] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007367648s
addons_test.go:473: (dbg) Run:  kubectl --context addons-108000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-108000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.197097696s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.01s)

TestAddons/parallel/CSI (59.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 17.229902ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-108000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-108000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1060b8a5-dfa6-4e31-8328-f6fffccb6e05] Pending
helpers_test.go:344: "task-pv-pod" [1060b8a5-dfa6-4e31-8328-f6fffccb6e05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1060b8a5-dfa6-4e31-8328-f6fffccb6e05] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.005664783s
addons_test.go:584: (dbg) Run:  kubectl --context addons-108000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-108000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-108000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-108000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-108000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-108000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-108000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f967936d-5973-4189-b742-c4ca56fb906e] Pending
helpers_test.go:344: "task-pv-pod-restore" [f967936d-5973-4189-b742-c4ca56fb906e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f967936d-5973-4189-b742-c4ca56fb906e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004433934s
addons_test.go:626: (dbg) Run:  kubectl --context addons-108000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-108000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-108000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-108000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.050940415s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-108000 addons disable volumesnapshots --alsologtostderr -v=1: (1.034502437s)
--- PASS: TestAddons/parallel/CSI (59.01s)

TestAddons/parallel/Headlamp (13.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-108000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-108000 --alsologtostderr -v=1: (1.570641016s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-cjrb9" [d8cf29e9-f638-4187-a7cd-36309a1ea8c8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-cjrb9" [d8cf29e9-f638-4187-a7cd-36309a1ea8c8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006102429s
--- PASS: TestAddons/parallel/Headlamp (13.58s)

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-7zhm5" [210c7cd6-2c09-4477-9644-8c75ed31807d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006898909s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-108000
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (54.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-108000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-108000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d42651c7-c77b-41b1-8346-fd773f2837b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d42651c7-c77b-41b1-8346-fd773f2837b2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d42651c7-c77b-41b1-8346-fd773f2837b2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005771047s
addons_test.go:891: (dbg) Run:  kubectl --context addons-108000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 ssh "cat /opt/local-path-provisioner/pvc-0ee0fed3-ca87-4048-a346-e0208a8c617e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-108000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-108000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-108000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.121548943s)
--- PASS: TestAddons/parallel/LocalPath (54.05s)

TestAddons/parallel/NvidiaDevicePlugin (5.69s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-72rnz" [c96d6e29-a8ff-422f-868d-6873607a7443] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004766546s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-108000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6r4w7" [df136fb6-a55a-4517-af64-bffec57bc127] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004914348s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-108000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-108000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-108000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-108000: (10.982659302s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-108000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-108000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-108000
--- PASS: TestAddons/StoppedEnableDisable (11.73s)

TestCertOptions (24.68s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-500000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-500000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (21.422919713s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-500000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-500000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-500000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-500000: (2.412605602s)
--- PASS: TestCertOptions (24.68s)

TestCertExpiration (230.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-562000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-562000 --memory=2048 --cert-expiration=3m --driver=docker : (21.900411113s)
E0226 03:08:32.505720   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-562000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0226 03:11:35.629380   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:45.870331   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-562000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.455074379s)
helpers_test.go:175: Cleaning up "cert-expiration-562000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-562000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-562000: (2.49615903s)
--- PASS: TestCertExpiration (230.85s)

TestDockerFlags (26.55s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.06547096s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-981000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-981000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-981000: (2.618696265s)
--- PASS: TestDockerFlags (26.55s)

TestForceSystemdFlag (26.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-415000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E0226 03:07:59.907201   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-415000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.46908693s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-415000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-415000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-415000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-415000: (2.784626953s)
--- PASS: TestForceSystemdFlag (26.86s)

TestForceSystemdEnv (25.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-783000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
* Starting control plane node minikube in cluster minikube
* Download complete!
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-783000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (22.214439674s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-783000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-783000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-783000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-783000: (2.678229086s)
--- PASS: TestForceSystemdEnv (25.34s)

TestHyperKitDriverInstallOrUpdate (9.27s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.27s)

TestErrorSpam/setup (22.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-841000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-841000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 --driver=docker : (22.32495651s)
--- PASS: TestErrorSpam/setup (22.33s)

TestErrorSpam/start (2.13s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 start --dry-run
--- PASS: TestErrorSpam/start (2.13s)

TestErrorSpam/status (1.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 status
--- PASS: TestErrorSpam/status (1.29s)

TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (2.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 stop: (2.159747558s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-841000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-841000 stop
--- PASS: TestErrorSpam/stop (2.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/test/nested/copy/10026/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-349000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.102536465s)
--- PASS: TestFunctional/serial/StartWithProxy (37.10s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-349000 --alsologtostderr -v=8: (39.791927155s)
functional_test.go:659: soft start took 39.792589949s for "functional-349000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.79s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-349000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:3.1: (4.155375488s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:3.3: (3.654501838s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 cache add registry.k8s.io/pause:latest: (2.652961597s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.46s)

TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3269488559/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache add minikube-local-cache-test:functional-349000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 cache add minikube-local-cache-test:functional-349000: (1.080191432s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache delete minikube-local-cache-test:functional-349000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-349000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (415.99175ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 cache reload: (2.199871478s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.47s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 kubectl -- --context functional-349000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 kubectl -- --context functional-349000 get pods: (1.127454055s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-349000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-349000 get pods: (1.639239981s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.64s)

TestFunctional/serial/ExtraConfig (41.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0226 02:37:59.878111   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:37:59.887159   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:37:59.898729   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:37:59.920227   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:37:59.960416   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:00.040641   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:00.200842   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:00.521566   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:01.161728   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:02.441833   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:05.002245   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:38:10.122859   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-349000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.133201393s)
functional_test.go:757: restart took 41.133322925s for "functional-349000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.13s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-349000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (3.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 logs
E0226 02:38:20.363668   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 logs: (3.13893749s)
--- PASS: TestFunctional/serial/LogsCmd (3.14s)

TestFunctional/serial/LogsFileCmd (3.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2937353184/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2937353184/001/logs.txt: (3.099033233s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.10s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-349000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-349000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-349000: exit status 115 (601.229507ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31041 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-349000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 config get cpus: exit status 14 (66.067059ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 config get cpus: exit status 14 (65.143435ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (11.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-349000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-349000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 12366: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.07s)

TestFunctional/parallel/DryRun (1.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
E0226 02:39:21.804042   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-349000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (721.092711ms)

-- stdout --
	* [functional-349000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0226 02:39:21.621998   12289 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:39:21.622257   12289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:39:21.622262   12289 out.go:304] Setting ErrFile to fd 2...
	I0226 02:39:21.622266   12289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:39:21.622465   12289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:39:21.623862   12289 out.go:298] Setting JSON to false
	I0226 02:39:21.646009   12289 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9532,"bootTime":1708934429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:39:21.646115   12289 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:39:21.669454   12289 out.go:177] * [functional-349000] minikube v1.32.0 on Darwin 14.3.1
	I0226 02:39:21.711474   12289 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 02:39:21.711499   12289 notify.go:220] Checking for updates...
	I0226 02:39:21.754213   12289 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:39:21.812370   12289 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:39:21.886355   12289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:39:21.928309   12289 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 02:39:21.949328   12289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 02:39:21.970783   12289 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 02:39:21.971207   12289 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:39:22.026869   12289 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:39:22.027041   12289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:39:22.129911   12289 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:116 SystemTime:2024-02-26 10:39:22.119061524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:39:22.172595   12289 out.go:177] * Using the docker driver based on existing profile
	I0226 02:39:22.193437   12289 start.go:299] selected driver: docker
	I0226 02:39:22.193456   12289 start.go:903] validating driver "docker" against &{Name:functional-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-349000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:39:22.193582   12289 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 02:39:22.218247   12289 out.go:177] 
	W0226 02:39:22.239434   12289 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0226 02:39:22.260232   12289 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.45s)

TestFunctional/parallel/InternationalLanguage (0.68s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-349000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-349000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (676.097367ms)

-- stdout --
	* [functional-349000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0226 02:39:20.940589   12271 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:39:20.940756   12271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:39:20.940762   12271 out.go:304] Setting ErrFile to fd 2...
	I0226 02:39:20.940766   12271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:39:20.940976   12271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:39:20.942960   12271 out.go:298] Setting JSON to false
	I0226 02:39:20.969600   12271 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9531,"bootTime":1708934429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0226 02:39:20.969705   12271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0226 02:39:20.991513   12271 out.go:177] * [functional-349000] minikube v1.32.0 sur Darwin 14.3.1
	I0226 02:39:21.055172   12271 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 02:39:21.033425   12271 notify.go:220] Checking for updates...
	I0226 02:39:21.097715   12271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	I0226 02:39:21.118186   12271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0226 02:39:21.139271   12271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 02:39:21.160312   12271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	I0226 02:39:21.181626   12271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 02:39:21.202627   12271 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 02:39:21.203021   12271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 02:39:21.258008   12271 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0226 02:39:21.258171   12271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 02:39:21.366988   12271 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:116 SystemTime:2024-02-26 10:39:21.35667382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0226 02:39:21.388668   12271 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0226 02:39:21.446839   12271 start.go:299] selected driver: docker
	I0226 02:39:21.446861   12271 start.go:903] validating driver "docker" against &{Name:functional-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-349000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 02:39:21.446999   12271 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 02:39:21.473389   12271 out.go:177] 
	W0226 02:39:21.496445   12271 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0226 02:39:21.517217   12271 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.68s)
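Both runs above fail the same pre-flight check: the requested 250MB is below minikube's usable floor, producing exit status 23 with reason code RSRC_INSUFFICIENT_REQ_MEMORY in English and French alike (only the message text is localized). A minimal Go sketch of such a memory-floor validation follows; names are hypothetical, and this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
	)

	// validateRequestedMemory mirrors the check reported in the log:
	// 250MB requested vs. a usable minimum of 1800MB.
	func validateRequestedMemory(requestedMB, minUsableMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250, 1800); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to", err)
			os.Exit(23) // the non-zero exit status the test asserts on
		}
	}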

TestFunctional/parallel/StatusCmd (1.27s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (27.51s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [89e66c53-ab91-45a1-ab06-2beb41d10d35] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003659483s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-349000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-349000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-349000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-349000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ea5c2c1-3a0d-4534-8fe7-422326cc73c0] Pending
helpers_test.go:344: "sp-pod" [7ea5c2c1-3a0d-4534-8fe7-422326cc73c0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0226 02:38:40.843821   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7ea5c2c1-3a0d-4534-8fe7-422326cc73c0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004005304s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-349000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-349000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-349000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73683ce8-1d67-41cd-8da8-a5cb6231c741] Pending
helpers_test.go:344: "sp-pod" [73683ce8-1d67-41cd-8da8-a5cb6231c741] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [73683ce8-1d67-41cd-8da8-a5cb6231c741] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003067251s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-349000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.51s)

TestFunctional/parallel/SSHCmd (0.86s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (2.63s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh -n functional-349000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cp functional-349000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd1044597265/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh -n functional-349000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh -n functional-349000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.63s)

TestFunctional/parallel/MySQL (28.66s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-349000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-8z5sm" [c0e5f790-9e19-46b4-81f2-70425dc467c6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-8z5sm" [c0e5f790-9e19-46b4-81f2-70425dc467c6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.022153383s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;": exit status 1 (127.496643ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;": exit status 1 (119.555293ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;": exit status 1 (148.799433ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-349000 exec mysql-859648c796-8z5sm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.66s)
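The three non-zero exits above are transient rather than failures: the pod reports Running before mysqld has finished initializing, so the test simply re-runs the query until it succeeds within its 10m window. A minimal Go sketch of that poll-until-ready pattern follows; it assumes kubectl on PATH and reuses the pod name from this run, and it is not the test's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same command the test runs; the transient ERROR 2002/1045
			// while mysqld boots surfaces here as a non-nil err.
			out, err := exec.Command("kubectl", "--context", "functional-349000",
				"exec", "mysql-859648c796-8z5sm", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("mysql ready:\n%s", out)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for mysql")
	}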

TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10026/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /etc/test/nested/copy/10026/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.56s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10026.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /etc/ssl/certs/10026.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10026.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /usr/share/ca-certificates/10026.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/100262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /etc/ssl/certs/100262.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/100262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /usr/share/ca-certificates/100262.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.56s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-349000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh "sudo systemctl is-active crio": exit status 1 (438.96662ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
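A pass here means the non-active runtime really is disabled: `systemctl is-active` exits non-zero for an inactive unit (surfaced above as ssh status 3), so a checker has to key off the `inactive` line on stdout rather than the exit code. A small Go sketch of one way to do that; the helper name is hypothetical and this is not the test's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// unitInactive reports whether systemctl inside the node says the unit
	// is inactive. The exec error is ignored on purpose: `systemctl is-active`
	// exits 0 only for active units, so stdout is what decides the result.
	func unitInactive(profile, unit string) bool {
		out, _ := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "sudo systemctl is-active "+unit).Output()
		return strings.TrimSpace(string(out)) == "inactive"
	}

	func main() {
		fmt.Println("crio inactive:", unitInactive("functional-349000", "crio"))
	}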

TestFunctional/parallel/License (1.42s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.421462254s)
--- PASS: TestFunctional/parallel/License (1.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11818: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-349000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [badb7176-d507-4445-82fe-70c6932814f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [badb7176-d507-4445-82fe-70c6932814f3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003807283s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-349000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-349000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11871: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-349000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-349000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-pmtsm" [2bf1142e-948d-4af9-bb55-74cdd8595e51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-pmtsm" [2bf1142e-948d-4af9-bb55-74cdd8595e51] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.006116416s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "499.166317ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.775507ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "418.165216ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "85.498061ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/any-port (12.52s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3817496085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708943942270324000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3817496085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708943942270324000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3817496085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708943942270324000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3817496085/001/test-1708943942270324000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (383.969366ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 26 10:39 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 26 10:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 26 10:39 test-1708943942270324000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh cat /mount-9p/test-1708943942270324000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-349000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [32d4f476-4eda-4b99-80c7-c026817c45ab] Pending
helpers_test.go:344: "busybox-mount" [32d4f476-4eda-4b99-80c7-c026817c45ab] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [32d4f476-4eda-4b99-80c7-c026817c45ab] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [32d4f476-4eda-4b99-80c7-c026817c45ab] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003157255s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-349000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3817496085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.52s)

TestFunctional/parallel/ServiceCmd/List (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 service list -o json
functional_test.go:1490: Took "625.190856ms" to run "out/minikube-darwin-amd64 -p functional-349000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 service --namespace=default --https --url hello-node: signal: killed (15.00360103s)

-- stdout --
	https://127.0.0.1:58171

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:58171
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
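The `signal: killed (15.00360103s)` above is the harness, not a hang: with the Docker driver on darwin, `minikube service --url` keeps a tunnel open in the foreground, so the test captures the printed URL and kills the process after a fixed window. A Go sketch of that run-with-deadline pattern follows; the wrapper is hypothetical, not the test's code:

	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Give the foreground tunnel 15s, then kill it, as the test does.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()

		var stdout bytes.Buffer
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"-p", "functional-349000", "service",
			"--namespace=default", "--https", "--url", "hello-node")
		cmd.Stdout = &stdout
		_ = cmd.Run() // expected: "signal: killed" once the deadline fires

		// The URL was printed before the kill, so it is still recoverable.
		fmt.Printf("found endpoint: %s", stdout.String())
	}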

TestFunctional/parallel/MountCmd/specific-port (2.4s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4153663478/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.427337ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4153663478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh "sudo umount -f /mount-9p": exit status 1 (373.772429ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-349000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4153663478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T" /mount1: exit status 1 (471.769478ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-349000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-349000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2511481319/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

TestFunctional/parallel/ServiceCmd/Format (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 service hello-node --url --format={{.IP}}
2024/02/26 02:39:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 service hello-node --url --format={{.IP}}: signal: killed (15.002561722s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)
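
The --format={{.IP}} argument is a Go template applied to the service URL, which is why stdout shows only 127.0.0.1. With the Docker driver on darwin the command then keeps a tunnel process alive (hence the stderr note about keeping the terminal open), so the harness reads the value and kills the process after its 15s window; the "signal: killed" exit is expected here and the subtest still passes. A sketch of the same call with an illustrative template (the .IP/.Port field names are assumed from the default format string):

    out/minikube-darwin-amd64 -p functional-349000 service hello-node --url --format="{{.IP}}:{{.Port}}"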

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.98s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-349000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-349000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-349000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-349000 image ls --format short --alsologtostderr:
I0226 02:40:04.001534   12689 out.go:291] Setting OutFile to fd 1 ...
I0226 02:40:04.001871   12689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:04.001877   12689 out.go:304] Setting ErrFile to fd 2...
I0226 02:40:04.001881   12689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:04.002077   12689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:40:04.002827   12689 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:04.002925   12689 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:04.003356   12689 cli_runner.go:164] Run: docker container inspect functional-349000 --format={{.State.Status}}
I0226 02:40:04.057038   12689 ssh_runner.go:195] Run: systemctl --version
I0226 02:40:04.057122   12689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-349000
I0226 02:40:04.110054   12689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57914 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/functional-349000/id_rsa Username:docker}
I0226 02:40:04.203161   12689 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-349000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-349000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-349000 | c6c301f81eb15 | 30B    |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-349000 | 218057e666e33 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-349000 image ls --format table --alsologtostderr:
I0226 02:40:10.914457   12729 out.go:291] Setting OutFile to fd 1 ...
I0226 02:40:10.915244   12729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:10.915254   12729 out.go:304] Setting ErrFile to fd 2...
I0226 02:40:10.915261   12729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:10.915966   12729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:40:10.916586   12729 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:10.916678   12729 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:10.917038   12729 cli_runner.go:164] Run: docker container inspect functional-349000 --format={{.State.Status}}
I0226 02:40:10.976084   12729 ssh_runner.go:195] Run: systemctl --version
I0226 02:40:10.976158   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-349000
I0226 02:40:11.031015   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57914 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/functional-349000/id_rsa Username:docker}
I0226 02:40:11.121359   12729 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-349000 image ls --format json --alsologtostderr:
[{"id":"c6c301f81eb155ebd8ad43875e737314161cf31656624f49ed8ea2618a8a15aa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-349000"],"size":"30"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-349000"],"size":"32900000"},{"id":"218057e666e33c220917a77085a8bb6ca35e712d5b5195211f34ad07fc8724fc","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-349000"],"size":"1240000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-349000 image ls --format json --alsologtostderr:
I0226 02:40:10.599152   12723 out.go:291] Setting OutFile to fd 1 ...
I0226 02:40:10.599438   12723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:10.599444   12723 out.go:304] Setting ErrFile to fd 2...
I0226 02:40:10.599448   12723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:10.600215   12723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:40:10.601268   12723 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:10.601368   12723 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:10.601733   12723 cli_runner.go:164] Run: docker container inspect functional-349000 --format={{.State.Status}}
I0226 02:40:10.654029   12723 ssh_runner.go:195] Run: systemctl --version
I0226 02:40:10.654108   12723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-349000
I0226 02:40:10.707719   12723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57914 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/functional-349000/id_rsa Username:docker}
I0226 02:40:10.800311   12723 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
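
The stdout above is one JSON array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into a JSON processor; a sketch assuming jq is installed:

    # list every tag known to the node's container runtime
    out/minikube-darwin-amd64 -p functional-349000 image ls --format json | jq -r '.[].repoTags[]'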

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-349000 image ls --format yaml --alsologtostderr:
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-349000
size: "32900000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c6c301f81eb155ebd8ad43875e737314161cf31656624f49ed8ea2618a8a15aa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-349000
size: "30"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-349000 image ls --format yaml --alsologtostderr:
I0226 02:40:04.321046   12695 out.go:291] Setting OutFile to fd 1 ...
I0226 02:40:04.321241   12695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:04.321246   12695 out.go:304] Setting ErrFile to fd 2...
I0226 02:40:04.321250   12695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:04.321491   12695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:40:04.322107   12695 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:04.322216   12695 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:04.322625   12695 cli_runner.go:164] Run: docker container inspect functional-349000 --format={{.State.Status}}
I0226 02:40:04.375268   12695 ssh_runner.go:195] Run: systemctl --version
I0226 02:40:04.375354   12695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-349000
I0226 02:40:04.429236   12695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57914 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/functional-349000/id_rsa Username:docker}
I0226 02:40:04.522402   12695 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 ssh pgrep buildkitd: exit status 1 (381.640213ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image build -t localhost/my-image:functional-349000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image build -t localhost/my-image:functional-349000 testdata/build --alsologtostderr: (5.263282705s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-349000 image build -t localhost/my-image:functional-349000 testdata/build --alsologtostderr:
I0226 02:40:05.017456   12711 out.go:291] Setting OutFile to fd 1 ...
I0226 02:40:05.017854   12711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:05.017860   12711 out.go:304] Setting ErrFile to fd 2...
I0226 02:40:05.017863   12711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:40:05.018035   12711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:40:05.018643   12711 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:05.020262   12711 config.go:182] Loaded profile config "functional-349000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0226 02:40:05.020783   12711 cli_runner.go:164] Run: docker container inspect functional-349000 --format={{.State.Status}}
I0226 02:40:05.075404   12711 ssh_runner.go:195] Run: systemctl --version
I0226 02:40:05.075492   12711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-349000
I0226 02:40:05.128256   12711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57914 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/functional-349000/id_rsa Username:docker}
I0226 02:40:05.220405   12711 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3422941405.tar
I0226 02:40:05.220646   12711 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0226 02:40:05.237208   12711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3422941405.tar
I0226 02:40:05.241808   12711 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3422941405.tar: stat -c "%s %y" /var/lib/minikube/build/build.3422941405.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3422941405.tar': No such file or directory
I0226 02:40:05.241856   12711 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3422941405.tar --> /var/lib/minikube/build/build.3422941405.tar (3072 bytes)
I0226 02:40:05.285082   12711 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3422941405
I0226 02:40:05.301079   12711 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3422941405 -xf /var/lib/minikube/build/build.3422941405.tar
I0226 02:40:05.317363   12711 docker.go:360] Building image: /var/lib/minikube/build/build.3422941405
I0226 02:40:05.317471   12711 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-349000 /var/lib/minikube/build/build.3422941405
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.4s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.5s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:218057e666e33c220917a77085a8bb6ca35e712d5b5195211f34ad07fc8724fc done
#8 naming to localhost/my-image:functional-349000 done
#8 DONE 0.0s
I0226 02:40:10.151477   12711 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-349000 /var/lib/minikube/build/build.3422941405: (4.833981119s)
I0226 02:40:10.151549   12711 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3422941405
I0226 02:40:10.168510   12711 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3422941405.tar
I0226 02:40:10.185439   12711 build_images.go:207] Built localhost/my-image:functional-349000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3422941405.tar
I0226 02:40:10.185481   12711 build_images.go:123] succeeded building to: functional-349000
I0226 02:40:10.185489   12711 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.96s)
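
The build stages logged above (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) imply a testdata/build fixture roughly like the following; this is a reconstruction from the log, not the verbatim file:

    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo test > content.txt
    out/minikube-darwin-amd64 -p functional-349000 image build -t localhost/my-image:functional-349000 .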

TestFunctional/parallel/ImageCommands/Setup (5.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.77994399s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-349000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.84s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-349000 service hello-node --url: signal: killed (15.003666542s)

-- stdout --
	http://127.0.0.1:58282

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:58282
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr: (3.2030629s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr: (2.066332284s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.355696393s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-349000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image load --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr: (3.128752575s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.91s)

TestFunctional/parallel/DockerEnv/bash (1.64s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-349000 docker-env) && out/minikube-darwin-amd64 status -p functional-349000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-349000 docker-env) && out/minikube-darwin-amd64 status -p functional-349000": (1.014355902s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-349000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.64s)
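
This is the standard docker-env round trip: point the host docker client at the daemon inside the minikube node, then run any docker command against it. By hand, exactly as the test does:

    eval "$(out/minikube-darwin-amd64 -p functional-349000 docker-env)"
    docker images   # now lists images from the daemon inside functional-349000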

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image save gcr.io/google-containers/addon-resizer:functional-349000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image save gcr.io/google-containers/addon-resizer:functional-349000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.233884757s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image rm gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.734540823s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.05s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-349000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-349000 image save --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-349000 image save --daemon gcr.io/google-containers/addon-resizer:functional-349000 --alsologtostderr: (1.364655307s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-349000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)
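
Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon exercise a full save/load round trip; the manual equivalent, with commands as invoked in the logs above and an illustrative tar path:

    out/minikube-darwin-amd64 -p functional-349000 image save gcr.io/google-containers/addon-resizer:functional-349000 ./addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-349000 image load ./addon-resizer-save.tar
    # copy the image from the node back into the host docker daemon
    out/minikube-darwin-amd64 -p functional-349000 image save --daemon gcr.io/google-containers/addon-resizer:functional-349000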

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-349000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-349000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-349000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/Setup (21.75s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-237000 --driver=docker 
E0226 02:40:43.724261   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-237000 --driver=docker : (21.751632633s)
--- PASS: TestImageBuild/serial/Setup (21.75s)

TestImageBuild/serial/NormalBuild (9.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-237000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-237000: (9.673012038s)
--- PASS: TestImageBuild/serial/NormalBuild (9.67s)

TestImageBuild/serial/BuildWithBuildArg (1.4s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-237000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-237000: (1.396991408s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.40s)
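
The --build-opt=build-arg=ENV_A=test_env_str flag is forwarded to the in-node build, so the test-arg fixture presumably declares a matching ARG; a hypothetical minimal shape (not the actual testdata):

    printf 'FROM busybox\nARG ENV_A\nRUN echo "ENV_A is $ENV_A"\n' > Dockerfile
    out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-237000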

TestImageBuild/serial/BuildWithDockerIgnore (1.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-237000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-237000: (1.454977573s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.06s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-237000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-237000: (1.058985351s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.06s)

TestJSONOutput/start/Command (75.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-826000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0226 02:49:00.157212   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-826000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m15.857367274s)
--- PASS: TestJSONOutput/start/Command (75.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-826000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-826000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-826000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-826000 --output=json --user=testUser: (10.763526405s)
--- PASS: TestJSONOutput/stop/Command (10.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-996000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-996000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (394.147029ms)

-- stdout --
	{"specversion":"1.0","id":"38ceb0aa-ea60-43e6-9a7f-bf6c52762164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-996000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e9aea8e-92c4-4ef8-b0d3-332bd6c9a93b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"e421d6e9-f732-483b-a511-32444b59c29f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig"}}
	{"specversion":"1.0","id":"5e2d5c9b-3ec7-475d-8ca8-82f44640d1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"33e4dc11-f0f6-4e67-a6c6-7641a6bf3f4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"721e3abd-849c-43d7-befa-c38e1c414225","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube"}}
	{"specversion":"1.0","id":"bf8ba6cc-d70a-4048-873a-a71e144f65d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"835ffe02-93b0-4f6e-821a-48d605dc3ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-996000
--- PASS: TestErrorJSONOutput (0.78s)
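
Each line the test captures above is a CloudEvents-style JSON envelope (`specversion`, `type`, `data`). A minimal Go sketch for consuming such a stream and surfacing `io.k8s.sigs.minikube.error` events; the struct fields are inferred from the dump above, not from a published schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the captured stream; inferred, not canonical.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate interleaved non-JSON log lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}

Piping the stream above through this (e.g. `minikube start --output=json ... | go run main.go`) would print the single DRV_UNSUPPORTED_OS error event.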

TestKicCustomNetwork/create_custom_network (24.36s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-926000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-926000 --network=: (21.887223995s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-926000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-926000: (2.419625571s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.36s)

TestKicCustomNetwork/use_default_bridge_network (22.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-134000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-134000 --network=bridge: (20.590300221s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-134000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-134000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-134000: (2.233570297s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.88s)

TestKicExistingNetwork (23.81s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-073000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-073000 --network=existing-network: (21.210156367s)
helpers_test.go:175: Cleaning up "existing-network-073000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-073000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-073000: (2.260545615s)
--- PASS: TestKicExistingNetwork (23.81s)

TestKicCustomSubnet (23.77s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-985000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-985000 --subnet=192.168.60.0/24: (21.313867978s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-985000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-985000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-985000: (2.403093282s)
--- PASS: TestKicCustomSubnet (23.77s)
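
The assertion behind this test is a single Go-template query against the Docker network. A small standalone sketch of the same check, reusing the network name and format string from the log (hypothetical program, not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same docker inspect query the test runs at kic_custom_network_test.go:161.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-985000",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
			fmt.Println("unexpected subnet:", got)
		} else {
			fmt.Println("subnet matches:", got)
		}
	}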

TestKicStaticIP (24.4s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-192000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-192000 --static-ip=192.168.200.200: (21.755735343s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-192000 ip
helpers_test.go:175: Cleaning up "static-ip-192000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-192000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-192000: (2.405708366s)
--- PASS: TestKicStaticIP (24.40s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (50.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-921000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-921000 --driver=docker : (21.8834531s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-924000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-924000 --driver=docker : (22.331089201s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-921000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
E0226 02:52:59.877090   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-924000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-924000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-924000: (2.504021446s)
helpers_test.go:175: Cleaning up "first-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-921000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-921000: (2.436614188s)
--- PASS: TestMinikubeProfile (50.93s)

TestMountStart/serial/StartWithMountFirst (7.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-172000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-172000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.79770046s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.80s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-172000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (7.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-185000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-185000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.808278544s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.81s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-185000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (2.08s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-172000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-172000 --alsologtostderr -v=5: (2.079941204s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-185000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.56s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-185000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-185000: (1.564561629s)
--- PASS: TestMountStart/serial/Stop (1.56s)

TestMountStart/serial/RestartStopped (9.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-185000
E0226 02:53:32.475434   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-185000: (8.066018185s)
--- PASS: TestMountStart/serial/RestartStopped (9.07s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-185000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (64.22s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-284000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0226 02:54:22.924808   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-284000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m3.512542124s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.22s)

TestMultiNode/serial/DeployApp2Nodes (41.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-284000 -- rollout status deployment/busybox: (6.860256936s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-6rgrw -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-q5cgf -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-6rgrw -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-q5cgf -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-6rgrw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-q5cgf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.92s)
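
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a poll loop tolerating slow scheduling onto the second node. A sketch of that retry shape, assuming kubectl on PATH and the jsonpath query from the log; the timings are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pods",
				"-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				ips := strings.Fields(string(out))
				if len(ips) == 2 && ips[0] != ips[1] {
					fmt.Println("pods spread across nodes:", ips)
					return
				}
			}
			time.Sleep(5 * time.Second) // second pod may still be coming up
		}
		fmt.Println("timed out waiting for 2 distinct pod IPs")
	}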

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-6rgrw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-6rgrw -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-q5cgf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-284000 -- exec busybox-5b5d89c9d6-q5cgf -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
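
The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above grabs the fifth line, third space-separated field, of busybox's nslookup output. The same extraction in Go, with the output shape assumed from the pipeline and the ping target (192.168.65.254) seen in the log:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Assumed busybox nslookup output; line 5 carries "Address 1: <ip>".
		out := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.65.254"
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			fmt.Println("unexpected nslookup output")
			return
		}
		fields := strings.Split(lines[4], " ") // awk 'NR==5'
		if len(fields) < 3 {
			fmt.Println("unexpected address line")
			return
		}
		fmt.Println("host IP:", fields[2]) // cut -d' ' -f3
	}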

TestMultiNode/serial/AddNode (15.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-284000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-284000 -v 3 --alsologtostderr: (14.22357431s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr: (1.017009638s)
--- PASS: TestMultiNode/serial/AddNode (15.24s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-284000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.53s)

TestMultiNode/serial/CopyFile (14.45s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp testdata/cp-test.txt multinode-284000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3080493384/001/cp-test_multinode-284000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000:/home/docker/cp-test.txt multinode-284000-m02:/home/docker/cp-test_multinode-284000_multinode-284000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test_multinode-284000_multinode-284000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000:/home/docker/cp-test.txt multinode-284000-m03:/home/docker/cp-test_multinode-284000_multinode-284000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test_multinode-284000_multinode-284000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp testdata/cp-test.txt multinode-284000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3080493384/001/cp-test_multinode-284000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m02:/home/docker/cp-test.txt multinode-284000:/home/docker/cp-test_multinode-284000-m02_multinode-284000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test_multinode-284000-m02_multinode-284000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m02:/home/docker/cp-test.txt multinode-284000-m03:/home/docker/cp-test_multinode-284000-m02_multinode-284000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test_multinode-284000-m02_multinode-284000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp testdata/cp-test.txt multinode-284000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3080493384/001/cp-test_multinode-284000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m03:/home/docker/cp-test.txt multinode-284000:/home/docker/cp-test_multinode-284000-m03_multinode-284000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000 "sudo cat /home/docker/cp-test_multinode-284000-m03_multinode-284000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 cp multinode-284000-m03:/home/docker/cp-test.txt multinode-284000-m02:/home/docker/cp-test_multinode-284000-m03_multinode-284000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 ssh -n multinode-284000-m02 "sudo cat /home/docker/cp-test_multinode-284000-m03_multinode-284000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.45s)
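
Every hop above is the same round trip: `minikube cp` a file onto a node, then `minikube ssh -n <node>` to cat it back and compare. One leg of that loop as a standalone sketch (binary path, profile, and file paths taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "multinode-284000"
		// Copy the local fixture onto the control-plane node ...
		if err := exec.Command("out/minikube-darwin-amd64", "-p", p, "cp",
			"testdata/cp-test.txt", p+":/home/docker/cp-test.txt").Run(); err != nil {
			fmt.Println("cp failed:", err)
			return
		}
		// ... then read it back over ssh to verify it landed intact.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", p, "ssh",
			"-n", p, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		fmt.Printf("round-tripped contents:\n%s", out)
	}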

TestMultiNode/serial/StopNode (2.98s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-284000 node stop m03: (1.493367879s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-284000 status: exit status 7 (739.240722ms)
-- stdout --
	multinode-284000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-284000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-284000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr: exit status 7 (747.610359ms)
-- stdout --
	multinode-284000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-284000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-284000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0226 02:55:57.965927   15848 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:55:57.966097   15848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:55:57.966102   15848 out.go:304] Setting ErrFile to fd 2...
	I0226 02:55:57.966107   15848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:55:57.966309   15848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:55:57.966488   15848 out.go:298] Setting JSON to false
	I0226 02:55:57.966509   15848 mustload.go:65] Loading cluster: multinode-284000
	I0226 02:55:57.966554   15848 notify.go:220] Checking for updates...
	I0226 02:55:57.966818   15848 config.go:182] Loaded profile config "multinode-284000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 02:55:57.966832   15848 status.go:255] checking status of multinode-284000 ...
	I0226 02:55:57.967207   15848 cli_runner.go:164] Run: docker container inspect multinode-284000 --format={{.State.Status}}
	I0226 02:55:58.017389   15848 status.go:330] multinode-284000 host status = "Running" (err=<nil>)
	I0226 02:55:58.017435   15848 host.go:66] Checking if "multinode-284000" exists ...
	I0226 02:55:58.017691   15848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-284000
	I0226 02:55:58.067258   15848 host.go:66] Checking if "multinode-284000" exists ...
	I0226 02:55:58.067559   15848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 02:55:58.067626   15848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-284000
	I0226 02:55:58.117690   15848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58752 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/multinode-284000/id_rsa Username:docker}
	I0226 02:55:58.212460   15848 ssh_runner.go:195] Run: systemctl --version
	I0226 02:55:58.217354   15848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 02:55:58.235179   15848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-284000
	I0226 02:55:58.288312   15848 kubeconfig.go:92] found "multinode-284000" server: "https://127.0.0.1:58751"
	I0226 02:55:58.288344   15848 api_server.go:166] Checking apiserver status ...
	I0226 02:55:58.288380   15848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 02:55:58.305330   15848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2291/cgroup
	W0226 02:55:58.320659   15848 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2291/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0226 02:55:58.320720   15848 ssh_runner.go:195] Run: ls
	I0226 02:55:58.324812   15848 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58751/healthz ...
	I0226 02:55:58.330407   15848 api_server.go:279] https://127.0.0.1:58751/healthz returned 200:
	ok
	I0226 02:55:58.330419   15848 status.go:421] multinode-284000 apiserver status = Running (err=<nil>)
	I0226 02:55:58.330430   15848 status.go:257] multinode-284000 status: &{Name:multinode-284000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 02:55:58.330440   15848 status.go:255] checking status of multinode-284000-m02 ...
	I0226 02:55:58.330710   15848 cli_runner.go:164] Run: docker container inspect multinode-284000-m02 --format={{.State.Status}}
	I0226 02:55:58.381654   15848 status.go:330] multinode-284000-m02 host status = "Running" (err=<nil>)
	I0226 02:55:58.381693   15848 host.go:66] Checking if "multinode-284000-m02" exists ...
	I0226 02:55:58.381975   15848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-284000-m02
	I0226 02:55:58.433747   15848 host.go:66] Checking if "multinode-284000-m02" exists ...
	I0226 02:55:58.434033   15848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 02:55:58.434128   15848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-284000-m02
	I0226 02:55:58.485031   15848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58791 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/multinode-284000-m02/id_rsa Username:docker}
	I0226 02:55:58.579673   15848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 02:55:58.596643   15848 status.go:257] multinode-284000-m02 status: &{Name:multinode-284000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0226 02:55:58.596662   15848 status.go:255] checking status of multinode-284000-m03 ...
	I0226 02:55:58.596906   15848 cli_runner.go:164] Run: docker container inspect multinode-284000-m03 --format={{.State.Status}}
	I0226 02:55:58.648275   15848 status.go:330] multinode-284000-m03 host status = "Stopped" (err=<nil>)
	I0226 02:55:58.648299   15848 status.go:343] host is not running, skipping remaining checks
	I0226 02:55:58.648308   15848 status.go:257] multinode-284000-m03 status: &{Name:multinode-284000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.98s)
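
In both runs above `status` exits 7 once a node is stopped, so the test asserts on the exit code rather than treating it as a failure; with Go's os/exec the code is recoverable from the ExitError while stdout still carries the per-node breakdown. A sketch (binary path and profile from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-284000", "status")
		out, err := cmd.Output() // stdout is populated even on a non-zero exit
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &ee):
			fmt.Printf("status exited %d (some node stopped):\n%s", ee.ExitCode(), out)
		default:
			fmt.Println("failed to run status:", err)
		}
	}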

TestMultiNode/serial/StartAfterStop (13.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-284000 node start m03 --alsologtostderr: (12.548546794s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.61s)

TestMultiNode/serial/RestartKeepsNodes (99.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-284000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-284000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-284000: (22.803990339s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-284000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-284000 --wait=true -v=8 --alsologtostderr: (1m16.309328122s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-284000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.25s)

TestMultiNode/serial/DeleteNode (5.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-284000 node delete m03: (5.022620192s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.88s)

TestMultiNode/serial/StopMultiNode (21.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 stop
E0226 02:57:59.877231   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-284000 stop: (21.487108761s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-284000 status: exit status 7 (165.27531ms)
-- stdout --
	multinode-284000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-284000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr: exit status 7 (167.13586ms)
-- stdout --
	multinode-284000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-284000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0226 02:58:19.113151   16314 out.go:291] Setting OutFile to fd 1 ...
	I0226 02:58:19.113315   16314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:58:19.113321   16314 out.go:304] Setting ErrFile to fd 2...
	I0226 02:58:19.113325   16314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 02:58:19.113507   16314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
	I0226 02:58:19.113691   16314 out.go:298] Setting JSON to false
	I0226 02:58:19.113712   16314 mustload.go:65] Loading cluster: multinode-284000
	I0226 02:58:19.113755   16314 notify.go:220] Checking for updates...
	I0226 02:58:19.114016   16314 config.go:182] Loaded profile config "multinode-284000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0226 02:58:19.114033   16314 status.go:255] checking status of multinode-284000 ...
	I0226 02:58:19.114417   16314 cli_runner.go:164] Run: docker container inspect multinode-284000 --format={{.State.Status}}
	I0226 02:58:19.165494   16314 status.go:330] multinode-284000 host status = "Stopped" (err=<nil>)
	I0226 02:58:19.165574   16314 status.go:343] host is not running, skipping remaining checks
	I0226 02:58:19.165587   16314 status.go:257] multinode-284000 status: &{Name:multinode-284000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 02:58:19.165620   16314 status.go:255] checking status of multinode-284000-m02 ...
	I0226 02:58:19.165905   16314 cli_runner.go:164] Run: docker container inspect multinode-284000-m02 --format={{.State.Status}}
	I0226 02:58:19.215811   16314 status.go:330] multinode-284000-m02 host status = "Stopped" (err=<nil>)
	I0226 02:58:19.215853   16314 status.go:343] host is not running, skipping remaining checks
	I0226 02:58:19.215864   16314 status.go:257] multinode-284000-m02 status: &{Name:multinode-284000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.82s)

TestMultiNode/serial/RestartMultiNode (62.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-284000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0226 02:58:32.491689   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-284000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m1.158188833s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-284000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (62.05s)

TestMultiNode/serial/ValidateNameConflict (24.81s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-284000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-284000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-284000-m02 --driver=docker : exit status 14 (363.034851ms)
-- stdout --
	* [multinode-284000-m02] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-284000-m02' is duplicated with machine name 'multinode-284000-m02' in profile 'multinode-284000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-284000-m03 --driver=docker 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-284000-m03 --driver=docker : (21.50016014s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-284000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-284000: exit status 80 (479.193693ms)
-- stdout --
	* Adding node m03 to cluster multinode-284000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-284000-m03 already exists in multinode-284000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-284000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-284000-m03: (2.404584282s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.81s)

TestPreload (182.71s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-834000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0226 02:59:55.546598   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-834000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m44.706489219s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-834000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-834000 image pull gcr.io/k8s-minikube/busybox: (5.824115725s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-834000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-834000: (10.82089188s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-834000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-834000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (58.620913844s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-834000 image list
helpers_test.go:175: Cleaning up "test-preload-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-834000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-834000: (2.43548302s)
--- PASS: TestPreload (182.71s)
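
The preload check above is a four-step flow: start with `--preload=false` on an older Kubernetes, pull an image, stop, then restart on defaults and confirm the image survived. A condensed sketch of the same sequence (profile name and flags from the log; error handling reduced to panics for brevity):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run is a tiny helper for the sketch; it fails loudly on any error.
	func run(args ...string) string {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
		}
		return string(out)
	}

	func main() {
		p := "test-preload-834000"
		run("start", "-p", p, "--memory=2200", "--preload=false", "--driver=docker", "--kubernetes-version=v1.24.4")
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--memory=2200", "--driver=docker") // restart must keep the image
		if !strings.Contains(run("-p", p, "image", "list"), "busybox") {
			panic("pulled image was lost across restart")
		}
		fmt.Println("image survived the restart")
		run("delete", "-p", p)
	}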

TestScheduledStopUnix (95.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-703000 --memory=2048 --driver=docker 
E0226 03:02:59.906033   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-703000 --memory=2048 --driver=docker : (21.204068698s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-703000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-703000 -n scheduled-stop-703000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-703000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-703000 --cancel-scheduled
E0226 03:03:32.504713   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-703000 -n scheduled-stop-703000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-703000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-703000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-703000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-703000: exit status 7 (116.170106ms)
-- stdout --
	scheduled-stop-703000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-703000 -n scheduled-stop-703000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-703000 -n scheduled-stop-703000: exit status 7 (116.652176ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-703000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-703000: (2.158018099s)
--- PASS: TestScheduledStopUnix (95.45s)
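
`stop --schedule` returns immediately and leaves a countdown running in the background, which is why the test can re-schedule, cancel, and poll status in between; the "os: process already finished" lines appear to show an earlier scheduled stop being superseded. A sketch of the schedule-then-cancel sequence (binary path, profile, and the `--format={{.Host}}` query from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "scheduled-stop-703000"
		mk := "out/minikube-darwin-amd64"
		// Schedule a stop; the CLI returns at once, the countdown runs detached.
		exec.Command(mk, "stop", "-p", p, "--schedule", "5m").Run()
		// Cancel it before it fires; the host should still report Running.
		exec.Command(mk, "stop", "-p", p, "--cancel-scheduled").Run()
		out, err := exec.Command(mk, "status", "--format={{.Host}}", "-p", p).Output()
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Printf("host after cancel: %s\n", out)
	}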

TestSkaffold (130.71s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1442297023 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1442297023 version: (1.710288985s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-341000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-341000 --memory=2600 --driver=docker : (21.552417992s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1442297023 run --minikube-profile skaffold-341000 --kube-context skaffold-341000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1442297023 run --minikube-profile skaffold-341000 --kube-context skaffold-341000 --status-check=true --port-forward=false --interactive=false: (1m27.267421409s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-758d4b78d5-jzpl7" [58e7b7d6-b348-49e7-87c4-1776cd1bc3d5] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004398857s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58b6497c8-9p42b" [a0bffe2d-aef2-4c2d-8a65-b4983876451d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003877363s
helpers_test.go:175: Cleaning up "skaffold-341000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-341000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-341000: (3.079887968s)
--- PASS: TestSkaffold (130.71s)

TestInsufficientStorage (10.61s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-502000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-502000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.610403864s)
-- stdout --
	{"specversion":"1.0","id":"358245b2-1dd6-430c-ba1d-4801e206e1ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-502000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3c367a2-11d7-4d4d-a830-a2b902886840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"e084ddb5-ec15-43fc-95db-5b89b0ca25b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig"}}
	{"specversion":"1.0","id":"eb0b5e21-3482-4aee-80e8-e3c2277113b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"20b765ad-9506-4b3d-9a43-b2e4c9c7bfd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"878ff418-b45c-40d4-a02b-048120945b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube"}}
	{"specversion":"1.0","id":"a4e5ca27-0090-4664-ab89-4188a082ffd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7593293d-5663-4c47-8b9f-35d476371da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"586b0703-78de-4f82-bd9f-b1cb53286fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4962c144-be06-4ba1-b621-c9fbf5326312","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"24117fa6-f133-4f38-8e79-a31d5db4b60e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"77b5cb68-ab28-4170-a2d3-361654e62543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-502000 in cluster insufficient-storage-502000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"469ad580-763c-49b5-a1eb-ceb7deb9a372","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a5d1a8f-a1d9-4890-ab7f-174f3cad9dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"db37107d-3f2c-4ef1-b092-48c916538c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
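Each stdout line above follows the CloudEvents 1.0 structured format (note the "specversion" field), which is what `--output=json` emits. Below is a minimal Go sketch for decoding one such line; the `minikubeEvent` struct and its field set are inferred from the payloads above, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope fields visible in the stdout above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`            // io.k8s.sigs.minikube.step / .info / .error
	DataContentType string            `json:"datacontenttype"` // "application/json"
	Data            map[string]string `json:"data"`            // message, currentstep, totalsteps, advice, exitcode, ...
}

func main() {
	// One event line copied verbatim from the output above (step 1 of 19).
	line := `{"specversion":"1.0","id":"4962c144-be06-4ba1-b621-c9fbf5326312","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
}

Every value under "data" is a string in these payloads (even "currentstep"), so a map[string]string suffices.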
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-502000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-502000 --output=json --layout=cluster: exit status 7 (401.170118ms)

-- stdout --
	{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0226 03:06:47.468462   17707 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-502000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-502000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-502000 --output=json --layout=cluster: exit status 7 (397.266415ms)

-- stdout --
	{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0226 03:06:47.871856   17719 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-502000" does not appear in /Users/jenkins/minikube-integration/18222-9538/kubeconfig
	E0226 03:06:47.888546   17719 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/insufficient-storage-502000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-502000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-502000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-502000: (2.203050041s)
--- PASS: TestInsufficientStorage (10.61s)
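The `status --output=json --layout=cluster` payloads in this test share one shape. A minimal Go sketch that decodes it follows; the type names are hypothetical and the fields are only those visible in this report.

package main

import (
	"encoding/json"
	"fmt"
)

// component/node/clusterStatus are inferred from the stdout blocks above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// Abbreviated from the first status payload above.
	payload := `{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-502000","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}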

TestRunningBinaryUpgrade (190s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2083993981 start -p running-upgrade-472000 --memory=2200 --vm-driver=docker 
E0226 03:11:02.956029   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:11:25.390813   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.396027   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.406127   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.426384   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.466547   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.546674   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:25.706906   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:26.027416   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:26.667581   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2083993981 start -p running-upgrade-472000 --memory=2200 --vm-driver=docker : (2m24.97318922s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-472000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0226 03:11:27.948531   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:11:30.508711   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-472000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (37.480012463s)
helpers_test.go:175: Cleaning up "running-upgrade-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-472000: (2.476422932s)
--- PASS: TestRunningBinaryUpgrade (190.00s)

TestMissingContainerUpgrade (107.68s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2394193706 start -p missing-upgrade-390000 --memory=2200 --driver=docker 
E0226 03:12:06.350807   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2394193706 start -p missing-upgrade-390000 --memory=2200 --driver=docker : (32.971612661s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-390000
E0226 03:12:47.312688   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-390000: (10.238036738s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-390000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-390000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0226 03:12:59.908089   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:13:32.506753   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-390000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (56.744896231s)
helpers_test.go:175: Cleaning up "missing-upgrade-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-390000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-390000: (2.442263203s)
--- PASS: TestMissingContainerUpgrade (107.68s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.83s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18222
- KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3378959397/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3378959397/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3378959397/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3378959397/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (21.83s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (24.52s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18222
- KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3122697873/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3122697873/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3122697873/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3122697873/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (24.52s)

TestStoppedBinaryUpgrade/Setup (5.61s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.61s)

TestStoppedBinaryUpgrade/Upgrade (73.21s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1984760098 start -p stopped-upgrade-590000 --memory=2200 --vm-driver=docker 
E0226 03:14:09.233905   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1984760098 start -p stopped-upgrade-590000 --memory=2200 --vm-driver=docker : (31.384409086s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1984760098 -p stopped-upgrade-590000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1984760098 -p stopped-upgrade-590000 stop: (12.249358728s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-590000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-590000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (29.570641179s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.03s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-590000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-590000: (3.028053887s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.03s)

TestPause/serial/Start (35.9s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-543000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-543000 --memory=2048 --install-addons=false --wait=all --driver=docker : (35.896718207s)
--- PASS: TestPause/serial/Start (35.90s)

TestPause/serial/SecondStartNoReconfiguration (41.23s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-543000 --alsologtostderr -v=1 --driver=docker 
E0226 03:16:25.394088   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-543000 --alsologtostderr -v=1 --driver=docker : (41.209026384s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.23s)

TestPause/serial/Pause (0.64s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-543000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.42s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-543000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-543000 --output=json --layout=cluster: exit status 2 (419.94302ms)

-- stdout --
	{"Name":"pause-543000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-543000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
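The StatusCode values in these cluster-status payloads borrow HTTP semantics, with 418 repurposed for "Paused". A lookup sketch covering only the code/name pairs that actually appear in this report:

package main

import "fmt"

// Status codes observed in this report's JSON status payloads.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	fmt.Println(statusNames[418]) // "Paused", as reported for the apiserver above
}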

TestPause/serial/Unpause (0.72s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-543000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (0.7s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-543000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

TestPause/serial/DeletePaused (2.58s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-543000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-543000 --alsologtostderr -v=5: (2.576052422s)
--- PASS: TestPause/serial/DeletePaused (2.58s)

TestPause/serial/VerifyDeletedResources (0.61s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0226 03:16:35.551937   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-543000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-543000: exit status 1 (55.242927ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-543000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (433.361121ms)

-- stdout --
	* [NoKubernetes-970000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18222
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
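In the run above, exit status 14 accompanies the MK_USAGE error class shown in stderr: `--kubernetes-version` conflicts with `--no-kubernetes`. A hypothetical Go sketch of that kind of mutual-exclusion check, using the standard `flag` package (illustrative only, not minikube's implementation):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags are mutually exclusive, as the stderr above states.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors exited with status 14 in the run above
	}
}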

TestNoKubernetes/serial/StartWithK8s (23.92s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-970000 --driver=docker 
E0226 03:16:53.076371   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-970000 --driver=docker : (23.484140674s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-970000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.92s)

TestNoKubernetes/serial/StartWithStopK8s (17.4s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --driver=docker : (14.815678129s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-970000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-970000 status -o json: exit status 2 (401.941943ms)

-- stdout --
	{"Name":"NoKubernetes-970000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-970000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-970000: (2.182569466s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.40s)

TestNoKubernetes/serial/Start (7.03s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-970000 --no-kubernetes --driver=docker : (7.028236259s)
--- PASS: TestNoKubernetes/serial/Start (7.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-970000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-970000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.265828ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (26.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (12.920045682s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (13.416555364s)
--- PASS: TestNoKubernetes/serial/ProfileList (26.34s)

TestNoKubernetes/serial/Stop (1.54s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-970000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-970000: (1.535299419s)
--- PASS: TestNoKubernetes/serial/Stop (1.54s)

TestNoKubernetes/serial/StartNoArgs (7.96s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-970000 --driver=docker 
E0226 03:17:59.921904   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-970000 --driver=docker : (7.956275685s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-970000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-970000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (417.410404ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestNetworkPlugins/group/auto/Start (74.92s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (1m14.920988044s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.92s)

TestNetworkPlugins/group/kindnet/Start (51.04s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.042000735s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.04s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q54hv" [0228695b-e745-4e68-8c64-9d3a0a1ec550] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q54hv" [0228695b-e745-4e68-8c64-9d3a0a1ec550] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.004341443s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.20s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qdzf2" [a615c274-15f7-4c59-84a0-1da5b35b0219] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004394002s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ckxz4" [2fe448cc-d3ec-4f7e-8d29-1807c143e8ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ckxz4" [2fe448cc-d3ec-4f7e-8d29-1807c143e8ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.003273939s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.20s)

TestNetworkPlugins/group/flannel/Start (49.14s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (49.136044491s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.14s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (37.87s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (37.865992121s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.87s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t48cz" [1e8a2ea9-30b7-45ce-bc6b-1b7392da1a7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004597325s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (14.19s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-swgxd" [bd3868ab-2db5-4cef-889b-237408771f5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-swgxd" [bd3868ab-2db5-4cef-889b-237408771f5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.00401577s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n65t6" [17b08768-1fb5-47f0-8a27-f85c57c96fcd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n65t6" [17b08768-1fb5-47f0-8a27-f85c57c96fcd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004659715s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.20s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (39.25s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (39.250318678s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.25s)

TestNetworkPlugins/group/kubenet/Start (76.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (1m16.108541869s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (76.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jhdh8" [a400cf00-0b04-497c-9e73-cfb65f75ad3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jhdh8" [a400cf00-0b04-497c-9e73-cfb65f75ad3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.004546754s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.20s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (49.96s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
E0226 03:22:59.923619   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (49.954890522s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.96s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.25s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-chbpm" [6521e3a6-70bb-4cda-960b-ad502fd1ddd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-chbpm" [6521e3a6-70bb-4cda-960b-ad502fd1ddd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.004717929s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.25s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (164.38s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (2m44.380173604s)
--- PASS: TestNetworkPlugins/group/calico/Start (164.38s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.63s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sf4g2" [58eb5db5-d5ce-441f-8b6b-10908242f285] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sf4g2" [58eb5db5-d5ce-441f-8b6b-10908242f285] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004656735s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (38.81s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0226 03:24:20.468416   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:24:21.749621   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:24:24.310254   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:24:29.430473   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:24:39.670726   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:24:43.735114   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:43.740502   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:43.750646   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:43.770794   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:43.811008   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:43.891153   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:44.051249   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:44.371920   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:45.012660   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:46.293649   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:48.854803   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:24:53.976151   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-722000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (38.813789886s)
--- PASS: TestNetworkPlugins/group/false/Start (38.81s)

TestNetworkPlugins/group/false/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

TestNetworkPlugins/group/false/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xp5d2" [70b55e5d-73b1-4818-9a77-68f2efcf269f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 03:25:00.151794   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:25:04.216410   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xp5d2" [70b55e5d-73b1-4818-9a77-68f2efcf269f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.004685337s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.20s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g9z9q" [11b387b1-258f-405b-9bd1-18321925a6cc] Running
E0226 03:26:25.407898   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:26:27.845282   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:26:28.021377   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004674361s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-722000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-722000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fk6s4" [a645410a-ed88-4f34-94ac-f6efbcf1ead5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fk6s4" [a645410a-ed88-4f34-94ac-f6efbcf1ead5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.007227947s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.20s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-722000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-722000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (80.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-136000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0226 03:27:08.806931   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:27:11.833238   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:11.838325   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:11.850360   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:11.870806   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:11.911297   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:11.991930   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:12.152062   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:12.472800   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:13.113557   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:14.394032   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:16.954182   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:22.074652   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:27.578852   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:27:29.462132   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
E0226 03:27:32.315392   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:42.975670   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:27:48.452322   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:27:52.797777   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:27:59.925549   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:28:01.490951   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.496065   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.506727   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.526959   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.567061   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.647190   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:01.807393   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:02.127909   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:02.769041   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:04.050261   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:06.612301   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:11.732600   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:28:21.974108   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-136000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (1m20.92847654s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.93s)

TestStartStop/group/no-preload/serial/DeployApp (14.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-136000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c06cd0ad-efb6-4ad9-a81e-8666355c120d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0226 03:28:30.728431   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
E0226 03:28:32.524250   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:28:33.760186   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c06cd0ad-efb6-4ad9-a81e-8666355c120d] Running
E0226 03:28:40.745369   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:40.751071   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:40.762240   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:40.782958   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:40.823307   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:40.905438   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:41.065871   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:41.386997   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:42.034325   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:42.454533   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.005234954s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-136000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.25s)
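Note: the deploy step finishes by exec'ing "ulimit -n" inside the busybox pod, a quick sanity check that exec works and that the container's open-file-descriptor limit is readable. A small sketch of the same check, assuming kubectl on PATH and the busybox pod from testdata/busybox.yaml:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("kubectl", "--context", "no-preload-136000",
    		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
    	if err != nil {
    		fmt.Println("exec failed:", err)
    		return
    	}
    	// ulimit -n prints the soft limit on open file descriptors.
    	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
    	if err != nil {
    		fmt.Println("unexpected output:", string(out))
    		return
    	}
    	fmt.Println("open-file limit in pod:", n)
    }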

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-136000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0226 03:28:43.314953   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-136000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044315537s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-136000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-136000 --alsologtostderr -v=3
E0226 03:28:45.876552   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:50.996849   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:28:51.384862   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-136000 --alsologtostderr -v=3: (11.004507581s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-136000 -n no-preload-136000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-136000 -n no-preload-136000: exit status 7 (114.129021ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-136000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)
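Note: minikube status deliberately exits nonzero when the cluster is not fully up, which is why the harness logs "exit status 7 (may be ok)" for a profile it has just stopped, and then shows that addon commands still succeed against the stopped profile. A sketch of tolerating that one exit code while treating anything else as a failure, under the same binary-path and profile assumptions as above:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-amd64", "status",
    		"--format={{.Host}}", "-p", "no-preload-136000", "-n", "no-preload-136000").Output()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 7 {
    		// Exit code 7 is what the harness saw for a cleanly stopped host.
    		fmt.Println("host stopped, tolerated as the harness does:", string(out))
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected status failure:", err)
    		return
    	}
    	fmt.Println("host state:", string(out))
    }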

TestStartStop/group/no-preload/serial/SecondStart (313.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-136000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0226 03:29:01.237229   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:29:19.190366   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:29:21.719462   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:29:23.415564   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:29:43.737545   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:29:46.874500   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:29:55.681076   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-136000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m12.693533453s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-136000 -n no-preload-136000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.14s)

TestStartStop/group/old-k8s-version/serial/Stop (1.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-326000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-326000 --alsologtostderr -v=3: (1.564488452s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (115.996715ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-326000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9j7q" [0af02977-bc6a-407b-8984-efc068257f97] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0226 03:34:08.482284   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9j7q" [0af02977-bc6a-407b-8984-efc068257f97] Running
E0226 03:34:19.231100   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004074163s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j9j7q" [0af02977-bc6a-407b-8984-efc068257f97] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004978478s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-136000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-136000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
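Note: "image list --format=json" emits a JSON document describing the images present in the node's runtime, and the harness scans it for images that are not part of a stock minikube install (here the busybox test image). A sketch that decodes the output generically, since the log does not show the exact schema; it assumes the output is a JSON array of objects:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-amd64",
    		"-p", "no-preload-136000", "image", "list", "--format=json").Output()
    	if err != nil {
    		fmt.Println("image list failed:", err)
    		return
    	}
    	// Decode generically: avoid assuming field names the log does not show.
    	var images []map[string]any
    	if err := json.Unmarshal(out, &images); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Println("images reported:", len(images))
    }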

TestStartStop/group/no-preload/serial/Pause (3.07s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-136000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-136000 -n no-preload-136000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-136000 -n no-preload-136000: exit status 2 (424.98248ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-136000 -n no-preload-136000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-136000 -n no-preload-136000: exit status 2 (437.630642ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-136000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-136000 -n no-preload-136000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-136000 -n no-preload-136000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)
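Note: the --format arguments in this step are Go text/template expressions evaluated against minikube's status struct, which is how {{.APIServer}} prints "Paused" while {{.Kubelet}} prints "Stopped" for the same paused profile, each again signalled through exit status 2 that the harness accepts. A simplified stand-in for that rendering; the Status type below is illustrative, not minikube's actual type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a stand-in with the field names the templates in the log
    // reference ({{.Host}}, {{.Kubelet}}, {{.APIServer}}).
    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	tmpl.Execute(os.Stdout, st) // prints "Paused", matching the paused profile above
    }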

TestStartStop/group/embed-certs/serial/FirstStart (75.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-624000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0226 03:34:43.779803   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:34:59.572325   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
E0226 03:35:27.261092   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-624000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (1m15.123046435s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.12s)

TestStartStop/group/embed-certs/serial/DeployApp (14.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-624000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b4d1e9c-4065-47a8-8a74-f87189d5584a] Pending
E0226 03:35:46.925542   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/flannel-722000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1b4d1e9c-4065-47a8-8a74-f87189d5584a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b4d1e9c-4065-47a8-8a74-f87189d5584a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 14.003101783s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-624000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (14.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-624000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-624000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.188570247s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-624000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (10.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-624000 --alsologtostderr -v=3
E0226 03:36:07.582038   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/enable-default-cni-722000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-624000 --alsologtostderr -v=3: (10.965028389s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-624000 -n embed-certs-624000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-624000 -n embed-certs-624000: exit status 7 (117.375788ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-624000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/embed-certs/serial/SecondStart (312.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-624000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0226 03:36:23.947467   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:36:25.453542   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
E0226 03:36:51.635202   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/calico-722000/client.crt: no such file or directory
E0226 03:37:11.877295   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/bridge-722000/client.crt: no such file or directory
E0226 03:37:59.970051   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 03:38:01.535286   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kubenet-722000/client.crt: no such file or directory
E0226 03:38:28.615524   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.621401   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.633611   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.654134   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.694640   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.775553   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:28.935749   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:29.257985   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:29.898278   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:31.178636   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:32.568842   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 03:38:33.739269   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:38.859514   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:38:40.789612   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/custom-flannel-722000/client.crt: no such file or directory
E0226 03:38:49.100533   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:39:09.582080   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:39:19.235714   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/auto-722000/client.crt: no such file or directory
E0226 03:39:43.782385   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/kindnet-722000/client.crt: no such file or directory
E0226 03:39:50.543129   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
E0226 03:39:59.576740   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/false-722000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-624000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (5m11.862543689s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-624000 -n embed-certs-624000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (312.30s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-95hpz" [155d09ff-3be6-4659-adee-52481b737428] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0226 03:41:25.456998   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/skaffold-341000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-95hpz" [155d09ff-3be6-4659-adee-52481b737428] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004155244s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-95hpz" [155d09ff-3be6-4659-adee-52481b737428] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005411557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-624000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-624000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-624000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-624000 -n embed-certs-624000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-624000 -n embed-certs-624000: exit status 2 (421.98036ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-624000 -n embed-certs-624000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-624000 -n embed-certs-624000: exit status 2 (424.249175ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-624000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-624000 -n embed-certs-624000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-624000 -n embed-certs-624000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.19s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-145000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-145000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (37.171172198s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-145000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [707ad875-0815-4d9d-a2fa-f08eb5bc3109] Pending
helpers_test.go:344: "busybox" [707ad875-0815-4d9d-a2fa-f08eb5bc3109] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [707ad875-0815-4d9d-a2fa-f08eb5bc3109] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004795978s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-145000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.27s)
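DeployApp verifies basic scheduling and exec: it creates a busybox pod from the repo's testdata/busybox.yaml (the manifest itself is not reproduced in this log), waits for the integration-test=busybox label to report Running, then execs "ulimit -n" to read the container's open-file limit. An equivalent imperative sketch, using the busybox image this report shows elsewhere:

  kubectl --context default-k8s-diff-port-145000 run busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --labels=integration-test=busybox --command -- sleep 3600
  kubectl --context default-k8s-diff-port-145000 wait --for=condition=Ready pod/busybox --timeout=120s
  kubectl --context default-k8s-diff-port-145000 exec busybox -- /bin/sh -c "ulimit -n"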

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-145000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-145000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105007089s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-145000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-145000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-145000 --alsologtostderr -v=3: (10.997720166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000: exit status 7 (119.103806ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-145000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.47s)
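EnableAddonAfterStop leans on minikube status exit codes to prove the cluster is down before enabling an addon offline: with the host stopped, status exits 7 instead of 0, which the test accepts before running addons enable against the stopped profile. Sketched by hand, assuming the profile is in the stopped state shown above:

  minikube status --format='{{.Host}}' -p default-k8s-diff-port-145000   # prints "Stopped", exits 7 in this run
  minikube addons enable dashboard -p default-k8s-diff-port-145000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4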

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-145000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
E0226 03:42:59.973537   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-145000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m11.990382248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.42s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wgwh6" [b82d63ae-0356-4bb2-ba8c-7f5315c4a538] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wgwh6" [b82d63ae-0356-4bb2-ba8c-7f5315c4a538] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004040943s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wgwh6" [b82d63ae-0356-4bb2-ba8c-7f5315c4a538] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006779888s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-145000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-145000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-145000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000: exit status 2 (424.370003ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000: exit status 2 (430.562717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-145000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-145000 -n default-k8s-diff-port-145000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (34.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-340000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0226 03:48:28.613062   10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/no-preload-136000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-340000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (34.991310053s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-340000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-340000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079340371s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)
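The warning is expected: the newest-cni group starts with --network-plugin=cni and a kubeadm pod-network-cidr of 10.42.0.0/16 but never installs an actual CNI plugin, so no pod network exists and workload pods would stay Pending. That is why the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps in this group are no-ops. To actually schedule pods on such a cluster one would install a plugin whose network matches the configured CIDR, for example (an assumption for illustration, not something this test does):

  # flannel's stock manifest defaults to 10.244.0.0/16; its net-conf.json would
  # need to be changed to 10.42.0.0/16 to match this cluster's pod CIDR
  kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml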

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-340000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-340000 --alsologtostderr -v=3: (5.927451612s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.93s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-340000 -n newest-cni-340000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-340000 -n newest-cni-340000: exit status 7 (115.881577ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-340000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/newest-cni/serial/SecondStart (28.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-340000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-340000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (28.000402213s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-340000 -n newest-cni-340000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (28.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-340000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-340000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-340000 -n newest-cni-340000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-340000 -n newest-cni-340000: exit status 2 (426.303583ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-340000 -n newest-cni-340000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-340000 -n newest-cni-340000: exit status 2 (421.869191ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-340000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-340000 -n newest-cni-340000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-340000 -n newest-cni-340000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

Test skip (21/333)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (18.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.951085ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-stmf6" [2273f03c-d454-43ac-87cd-30695a100307] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005207372s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cp5jr" [82afadff-8f17-46b5-bced-5c12c72478cb] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005603377s
addons_test.go:340: (dbg) Run:  kubectl --context addons-108000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-108000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-108000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.962611244s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.05s)
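The Registry test gets as far as an in-cluster connectivity check -- a one-shot busybox pod running wget --spider against registry.kube-system.svc.cluster.local -- and then skips, apparently because the remaining steps assume the registry is reachable from the host, which the port-forwarded Docker driver on macOS does not provide directly. A hypothetical way to finish the check by hand from the host, assuming the addon's registry service exposes port 80 as it normally does:

  kubectl --context addons-108000 -n kube-system port-forward svc/registry 5000:80 &
  curl -s http://127.0.0.1:5000/v2/   # an empty JSON object means the registry API is answering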

                                                
                                    
TestAddons/parallel/Ingress (11.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-108000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-108000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-108000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [52240569-cc36-4030-b04e-fae6f7c92fec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [52240569-cc36-4030-b04e-fae6f7c92fec] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004549203s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-108000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.91s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (16.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-349000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-349000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-smnhw" [5fc76425-1203-40ce-b120-a88c001acd3e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-smnhw" [5fc76425-1203-40ce-b120-a88c001acd3e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.005024798s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (16.14s)
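ServiceCmdConnect is skipped for the same driver limitation: the deployment and its NodePort service come up fine, but hitting the NodePort from the host does not work when the driver port-forwards instead of exposing a node IP (kubernetes/minikube#7383). On this driver the supported route is minikube's own tunnel, e.g. (a sketch, not part of the test):

  minikube service hello-node-connect --url -p functional-349000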

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-722000 [pass: true] --------------------------------
The cilium-722000 profile was never started, so every debug probe failed in one of three ways.

Probes failing with "Error in configuration: context was not found for specified context: cilium-722000":
>>> netcat: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; describe cilium daemon set and its pod(s); describe cilium deployment and its pod(s); cms

Probes failing with 'error: context "cilium-722000" does not exist':
>>> k8s: describe netcat deployment and pod(s); netcat logs; describe coredns deployment and pods; coredns logs; describe api server pod(s); api server logs; cilium daemon set container(s) logs (current and previous); cilium deployment container(s) logs (current and previous); describe kube-proxy daemon set and pod(s); kube-proxy logs

Probes failing with '* Profile "cilium-722000" not found. Run "minikube profile list" to view all profiles.' / 'To start a cluster, run: "minikube start -p cilium-722000"':
>>> host: /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config
>>> k8s: kubelet logs

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
----------------------- debugLogs end: cilium-722000 [took: 6.406566979s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-722000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-722000
--- SKIP: TestNetworkPlugins/group/cilium (6.84s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-553000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-553000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)