Test Report: Docker_macOS 17866

8c6a2e99755a9a0a7d8f4ed404c065becb2fd234:2024-01-08:32612

Failed tests (14/329)

TestIngressAddonLegacy/StartLegacyK8sCluster (261.6s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0108 18:45:24.131135   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:45:50.517589   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.523312   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.535473   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.557832   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.599211   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.681340   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.843547   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.165356   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.805732   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.822928   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:45:53.086526   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:55.646959   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:00.767297   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:11.007936   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:31.487972   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:47:12.448991   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.55641026s)
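The cert_rotation errors above come from client-go attempting to reload client certificates for earlier test profiles (addons-388000, functional-142000) whose .minikube profile directories have already been cleaned up; they are almost certainly stale-kubeconfig noise rather than part of this failure. A minimal sketch for confirming that, assuming the kubeconfig path shown in the log below; the loop itself is hypothetical diagnostic code, not part of the test suite:

KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
kubectl config view --kubeconfig="$KUBECONFIG" \
  -o jsonpath='{range .users[*]}{.name} {.user.client-certificate}{"\n"}{end}' |
while read -r user cert; do
  # flag kubeconfig users whose client certificate file no longer exists on disk
  [ -n "$cert" ] && [ ! -f "$cert" ] && echo "stale: $user -> $cert"
done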

-- stdout --
	* [ingress-addon-legacy-134000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-134000 in cluster ingress-addon-legacy-134000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
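Note the duplicated "Generating certificates and keys ... / Booting up control plane ..." lines in the stdout above: minikube prints these once per kubeadm bootstrap attempt, so the repetition suggests the first control-plane boot failed and was retried before the start finally gave up with exit status 109. To dig further, one would typically pull the full cluster logs, e.g. (a hypothetical local follow-up; profile name taken from the log):

out/minikube-darwin-amd64 logs -p ingress-addon-legacy-134000 --file=./ingress-addon-legacy.log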
** stderr ** 
	I0108 18:43:17.435562   78141 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:43:17.435843   78141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:43:17.435852   78141 out.go:309] Setting ErrFile to fd 2...
	I0108 18:43:17.435859   78141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:43:17.436200   78141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:43:17.438258   78141 out.go:303] Setting JSON to false
	I0108 18:43:17.465564   78141 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34969,"bootTime":1704733228,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:43:17.465679   78141 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:43:17.486987   78141 out.go:177] * [ingress-addon-legacy-134000] minikube v1.32.0 on Darwin 14.2.1
	I0108 18:43:17.509156   78141 notify.go:220] Checking for updates...
	I0108 18:43:17.529878   78141 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 18:43:17.573745   78141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:43:17.616791   78141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:43:17.658634   78141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:43:17.679840   78141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 18:43:17.721761   78141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 18:43:17.743241   78141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 18:43:17.801579   78141 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:43:17.801744   78141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:43:17.909900   78141 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 02:43:17.899945498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:43:17.930956   78141 out.go:177] * Using the docker driver based on user configuration
	I0108 18:43:17.973074   78141 start.go:298] selected driver: docker
	I0108 18:43:17.973104   78141 start.go:902] validating driver "docker" against <nil>
	I0108 18:43:17.973118   78141 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 18:43:17.977575   78141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:43:18.080151   78141 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 02:43:18.070736133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:43:18.080355   78141 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 18:43:18.080539   78141 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 18:43:18.102049   78141 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 18:43:18.122764   78141 cni.go:84] Creating CNI manager for ""
	I0108 18:43:18.122804   78141 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 18:43:18.122825   78141 start_flags.go:321] config:
	{Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:43:18.145109   78141 out.go:177] * Starting control plane node ingress-addon-legacy-134000 in cluster ingress-addon-legacy-134000
	I0108 18:43:18.166947   78141 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 18:43:18.188570   78141 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 18:43:18.230920   78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 18:43:18.231001   78141 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 18:43:18.283070   78141 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 18:43:18.283095   78141 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 18:43:18.284472   78141 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0108 18:43:18.284486   78141 cache.go:56] Caching tarball of preloaded images
	I0108 18:43:18.284681   78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 18:43:18.306485   78141 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 18:43:18.348258   78141 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:43:18.427306   78141 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0108 18:43:25.492415   78141 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:43:25.492597   78141 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:43:26.123641   78141 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
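The preload tarball is fetched with its md5 appended as a checksum query parameter and then verified locally (the download/verify steps just above). The same check can be done by hand; a sketch using the URL and checksum from the log (md5 -q is the macOS digest tool, md5sum on Linux):

URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4'
curl -fSL -o preload.tar.lz4 "$URL"
# compare against the md5 the downloader used (from the ?checksum= parameter above)
test "$(md5 -q preload.tar.lz4)" = 'ff35f06d4f6c0bac9297b8f85d8ebf70' && echo checksum OK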
	I0108 18:43:26.123966   78141 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json ...
	I0108 18:43:26.123993   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json: {Name:mk96f6108a6d2d92aa0942f6b6515cfeb1c7d186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:26.124322   78141 cache.go:194] Successfully downloaded all kic artifacts
	I0108 18:43:26.124357   78141 start.go:365] acquiring machines lock for ingress-addon-legacy-134000: {Name:mk10b614d1fdefcebb96221272b7d22008caaa38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 18:43:26.124487   78141 start.go:369] acquired machines lock for "ingress-addon-legacy-134000" in 122.978µs
	I0108 18:43:26.124509   78141 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 18:43:26.124580   78141 start.go:125] createHost starting for "" (driver="docker")
	I0108 18:43:26.176315   78141 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 18:43:26.176614   78141 start.go:159] libmachine.API.Create for "ingress-addon-legacy-134000" (driver="docker")
	I0108 18:43:26.176663   78141 client.go:168] LocalClient.Create starting
	I0108 18:43:26.176862   78141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem
	I0108 18:43:26.176954   78141 main.go:141] libmachine: Decoding PEM data...
	I0108 18:43:26.176985   78141 main.go:141] libmachine: Parsing certificate...
	I0108 18:43:26.177075   78141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem
	I0108 18:43:26.177148   78141 main.go:141] libmachine: Decoding PEM data...
	I0108 18:43:26.177165   78141 main.go:141] libmachine: Parsing certificate...
	I0108 18:43:26.177982   78141 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-134000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 18:43:26.233731   78141 cli_runner.go:211] docker network inspect ingress-addon-legacy-134000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 18:43:26.233864   78141 network_create.go:281] running [docker network inspect ingress-addon-legacy-134000] to gather additional debugging logs...
	I0108 18:43:26.233887   78141 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-134000
	W0108 18:43:26.285670   78141 cli_runner.go:211] docker network inspect ingress-addon-legacy-134000 returned with exit code 1
	I0108 18:43:26.285716   78141 network_create.go:284] error running [docker network inspect ingress-addon-legacy-134000]: docker network inspect ingress-addon-legacy-134000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-134000 not found
	I0108 18:43:26.285739   78141 network_create.go:286] output of [docker network inspect ingress-addon-legacy-134000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-134000 not found
	
	** /stderr **
	I0108 18:43:26.285903   78141 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 18:43:26.337016   78141 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00061a3d0}
	I0108 18:43:26.337058   78141 network_create.go:124] attempt to create docker network ingress-addon-legacy-134000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0108 18:43:26.337130   78141 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 ingress-addon-legacy-134000
	I0108 18:43:26.422752   78141 network_create.go:108] docker network ingress-addon-legacy-134000 192.168.49.0/24 created
	I0108 18:43:26.422812   78141 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-134000" container
	I0108 18:43:26.422922   78141 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 18:43:26.474090   78141 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-134000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --label created_by.minikube.sigs.k8s.io=true
	I0108 18:43:26.525781   78141 oci.go:103] Successfully created a docker volume ingress-addon-legacy-134000
	I0108 18:43:26.525924   78141 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-134000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --entrypoint /usr/bin/test -v ingress-addon-legacy-134000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0108 18:43:26.913805   78141 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-134000
	I0108 18:43:26.913871   78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 18:43:26.913885   78141 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 18:43:26.914005   78141 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-134000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 18:43:29.144215   78141 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-134000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.23015887s)
	I0108 18:43:29.144239   78141 kic.go:203] duration metric: took 2.230374 seconds to extract preloaded images to volume
	I0108 18:43:29.144357   78141 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 18:43:29.244509   78141 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-134000 --name ingress-addon-legacy-134000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --network ingress-addon-legacy-134000 --ip 192.168.49.2 --volume ingress-addon-legacy-134000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0108 18:43:29.520261   78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Running}}
	I0108 18:43:29.574612   78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:43:29.631576   78141 cli_runner.go:164] Run: docker exec ingress-addon-legacy-134000 stat /var/lib/dpkg/alternatives/iptables
	I0108 18:43:29.791731   78141 oci.go:144] the created container "ingress-addon-legacy-134000" has a running status.
	I0108 18:43:29.791777   78141 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa...
	I0108 18:43:29.937747   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 18:43:29.937813   78141 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 18:43:30.000420   78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:43:30.055095   78141 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 18:43:30.055116   78141 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-134000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 18:43:30.153079   78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:43:30.205079   78141 machine.go:88] provisioning docker machine ...
	I0108 18:43:30.205129   78141 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-134000"
	I0108 18:43:30.205243   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:30.256351   78141 main.go:141] libmachine: Using SSH client type: native
	I0108 18:43:30.256685   78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 63373 <nil> <nil>}
	I0108 18:43:30.256702   78141 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-134000 && echo "ingress-addon-legacy-134000" | sudo tee /etc/hostname
	I0108 18:43:30.401949   78141 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-134000
	
	I0108 18:43:30.402064   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:30.454199   78141 main.go:141] libmachine: Using SSH client type: native
	I0108 18:43:30.454493   78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 63373 <nil> <nil>}
	I0108 18:43:30.454520   78141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-134000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-134000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-134000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 18:43:30.588685   78141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 18:43:30.588710   78141 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 18:43:30.588731   78141 ubuntu.go:177] setting up certificates
	I0108 18:43:30.588744   78141 provision.go:83] configureAuth start
	I0108 18:43:30.588824   78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
	I0108 18:43:30.639879   78141 provision.go:138] copyHostCerts
	I0108 18:43:30.639926   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 18:43:30.639983   78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 18:43:30.639991   78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 18:43:30.640117   78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 18:43:30.640311   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 18:43:30.640341   78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 18:43:30.640346   78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 18:43:30.640445   78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 18:43:30.640584   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 18:43:30.640622   78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 18:43:30.640626   78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 18:43:30.640731   78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 18:43:30.640916   78141 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-134000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-134000]
	I0108 18:43:30.743500   78141 provision.go:172] copyRemoteCerts
	I0108 18:43:30.743549   78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 18:43:30.743608   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:30.795579   78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:43:30.890866   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 18:43:30.890951   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 18:43:30.910641   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 18:43:30.910715   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 18:43:30.931061   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 18:43:30.931147   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 18:43:30.950987   78141 provision.go:86] duration metric: configureAuth took 362.230814ms
	I0108 18:43:30.951002   78141 ubuntu.go:193] setting minikube options for container-runtime
	I0108 18:43:30.951146   78141 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:43:30.951226   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:31.003150   78141 main.go:141] libmachine: Using SSH client type: native
	I0108 18:43:31.003448   78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 63373 <nil> <nil>}
	I0108 18:43:31.003467   78141 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 18:43:31.138734   78141 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 18:43:31.138761   78141 ubuntu.go:71] root file system type: overlay
	I0108 18:43:31.138862   78141 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 18:43:31.138952   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:31.190708   78141 main.go:141] libmachine: Using SSH client type: native
	I0108 18:43:31.191024   78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 63373 <nil> <nil>}
	I0108 18:43:31.191077   78141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 18:43:31.333563   78141 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 18:43:31.333658   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:31.386114   78141 main.go:141] libmachine: Using SSH client type: native
	I0108 18:43:31.386418   78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 63373 <nil> <nil>}
	I0108 18:43:31.386431   78141 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 18:43:31.953338   78141 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 02:43:31.331772822 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 18:43:31.953366   78141 machine.go:91] provisioned docker machine in 1.748279329s
	I0108 18:43:31.953381   78141 client.go:171] LocalClient.Create took 5.776764198s
	I0108 18:43:31.953401   78141 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-134000" took 5.776841772s
	I0108 18:43:31.953409   78141 start.go:300] post-start starting for "ingress-addon-legacy-134000" (driver="docker")
	I0108 18:43:31.953417   78141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 18:43:31.953482   78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 18:43:31.953544   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:32.006087   78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:43:32.101475   78141 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 18:43:32.105350   78141 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 18:43:32.105376   78141 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 18:43:32.105384   78141 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 18:43:32.105389   78141 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 18:43:32.105404   78141 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 18:43:32.105508   78141 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 18:43:32.105697   78141 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 18:43:32.105704   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> /etc/ssl/certs/753692.pem
	I0108 18:43:32.105909   78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 18:43:32.113825   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 18:43:32.133921   78141 start.go:303] post-start completed in 180.504421ms
	I0108 18:43:32.134521   78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
	I0108 18:43:32.185949   78141 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json ...
	I0108 18:43:32.186433   78141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 18:43:32.186499   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:32.237582   78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:43:32.329850   78141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 18:43:32.334616   78141 start.go:128] duration metric: createHost completed in 6.210080897s
	I0108 18:43:32.334633   78141 start.go:83] releasing machines lock for "ingress-addon-legacy-134000", held for 6.210194051s
	I0108 18:43:32.334710   78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
	I0108 18:43:32.385908   78141 ssh_runner.go:195] Run: cat /version.json
	I0108 18:43:32.385935   78141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 18:43:32.385984   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:32.386022   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:32.466370   78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:43:32.466369   78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:43:32.666384   78141 ssh_runner.go:195] Run: systemctl --version
	I0108 18:43:32.671356   78141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 18:43:32.676242   78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 18:43:32.697676   78141 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 18:43:32.697752   78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 18:43:32.712581   78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 18:43:32.727276   78141 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
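The three find/sed passes above patch the stock CNI configs in place: the loopback config gains a "name" field and cniVersion 1.0.0, and any bridge/podman configs are rewritten to the 10.244.0.0/16 pod subnet. A quick hand-check of the result (a sketch; assumes the node container from this run is still running):

    docker exec ingress-addon-legacy-134000 grep -h '"subnet"' \
      /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist
    # expect each hit to read: "subnet": "10.244.0.0/16"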
	I0108 18:43:32.727291   78141 start.go:475] detecting cgroup driver to use...
	I0108 18:43:32.727303   78141 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 18:43:32.727428   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 18:43:32.741822   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0108 18:43:32.750888   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 18:43:32.760015   78141 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 18:43:32.760073   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 18:43:32.769434   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 18:43:32.778620   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 18:43:32.787820   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 18:43:32.796876   78141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 18:43:32.805528   78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 18:43:32.814746   78141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 18:43:32.822648   78141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 18:43:32.830598   78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 18:43:32.879051   78141 ssh_runner.go:195] Run: sudo systemctl restart containerd
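The block from the crictl.yaml write through this restart retunes containerd to match the detected cgroupfs driver: the sandbox image is pinned to registry.k8s.io/pause:3.2, SystemdCgroup is forced to false, the legacy runc v1 runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A spot-check sketch, under the same container-name assumption:

    docker exec ingress-addon-legacy-134000 \
      grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml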
	I0108 18:43:32.965322   78141 start.go:475] detecting cgroup driver to use...
	I0108 18:43:32.965345   78141 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 18:43:32.965414   78141 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 18:43:32.983717   78141 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 18:43:32.983784   78141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 18:43:32.994811   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 18:43:33.010822   78141 ssh_runner.go:195] Run: which cri-dockerd
	I0108 18:43:33.015239   78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 18:43:33.024707   78141 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 18:43:33.041304   78141 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 18:43:33.115738   78141 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 18:43:33.205469   78141 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 18:43:33.205562   78141 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 18:43:33.222087   78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 18:43:33.304156   78141 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 18:43:33.537587   78141 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 18:43:33.561066   78141 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
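The same cgroupfs decision is then applied to Docker itself: a 130-byte /etc/docker/daemon.json is written, the daemon is restarted, and the server version is read back (24.0.7, per the "Preparing Kubernetes" line below). The active driver can be confirmed from the host, mirroring the docker info call minikube issues later:

    docker exec ingress-addon-legacy-134000 docker info --format '{{.CgroupDriver}}'
    # expect: cgroupfs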
	I0108 18:43:33.608790   78141 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0108 18:43:33.608891   78141 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-134000 dig +short host.docker.internal
	I0108 18:43:33.724911   78141 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 18:43:33.725011   78141 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 18:43:33.729601   78141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
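The one-liner above is minikube's idempotent /etc/hosts update: drop any existing host.minikube.internal line, append the freshly dug address, and sudo cp the temp file back over /etc/hosts (cp rather than mv, so the file's inode and permissions survive). The same pattern, generalized with hypothetical NAME/IP variables:

    NAME=host.minikube.internal; IP=192.168.65.254
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts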
	I0108 18:43:33.739780   78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:43:33.790962   78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 18:43:33.791032   78141 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 18:43:33.808202   78141 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0108 18:43:33.808231   78141 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0108 18:43:33.808309   78141 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 18:43:33.816755   78141 ssh_runner.go:195] Run: which lz4
	I0108 18:43:33.820699   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 18:43:33.820837   78141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 18:43:33.824751   78141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 18:43:33.824777   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0108 18:43:39.417321   78141 docker.go:635] Took 5.596583 seconds to copy over tarball
	I0108 18:43:39.417400   78141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 18:43:41.027041   78141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.609631611s)
	I0108 18:43:41.027056   78141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 18:43:41.070422   78141 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 18:43:41.078651   78141 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0108 18:43:41.093571   78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 18:43:41.147743   78141 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 18:43:42.220175   78141 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.072417347s)
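The preload path above is the slow first-start branch: the stat probe shows /preloaded.tar.lz4 is absent, so the ~424 MB tarball is scp'd in (about 5.6 s here), unpacked into /var with tar -I lz4, deleted, and Docker is restarted behind a rewritten 2502-byte repositories.json. The tarball can be integrity-checked on the host before the copy (a sketch; requires a local lz4 binary):

    cd /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball
    lz4 -t preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4   # decompression test only, writes nothing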
	I0108 18:43:42.220263   78141 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 18:43:42.239218   78141 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0108 18:43:42.239231   78141 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0108 18:43:42.239241   78141 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 18:43:42.244344   78141 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 18:43:42.244353   78141 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 18:43:42.244552   78141 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 18:43:42.244597   78141 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 18:43:42.244732   78141 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 18:43:42.244858   78141 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 18:43:42.244979   78141 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 18:43:42.245242   78141 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 18:43:42.249223   78141 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 18:43:42.249218   78141 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 18:43:42.250854   78141 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 18:43:42.251219   78141 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 18:43:42.251475   78141 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 18:43:42.251522   78141 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 18:43:42.251559   78141 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 18:43:42.251536   78141 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 18:43:42.694752   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 18:43:42.712890   78141 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 18:43:42.712937   78141 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 18:43:42.712992   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 18:43:42.730570   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 18:43:42.731489   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 18:43:42.738490   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 18:43:42.749761   78141 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 18:43:42.749793   78141 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 18:43:42.749861   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 18:43:42.756292   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 18:43:42.759615   78141 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 18:43:42.759639   78141 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 18:43:42.759705   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0108 18:43:42.768873   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 18:43:42.773494   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 18:43:42.779751   78141 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 18:43:42.779785   78141 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 18:43:42.779883   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 18:43:42.783075   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 18:43:42.793062   78141 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 18:43:42.793097   78141 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 18:43:42.793177   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0108 18:43:42.801040   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 18:43:42.804252   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 18:43:42.814754   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 18:43:42.822946   78141 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 18:43:42.822973   78141 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I0108 18:43:42.823046   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0108 18:43:42.840360   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 18:43:42.935842   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 18:43:42.955494   78141 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 18:43:42.955529   78141 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 18:43:42.955595   78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 18:43:42.973329   78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 18:43:43.202318   78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 18:43:43.221864   78141 cache_images.go:92] LoadImages completed in 982.618104ms
	W0108 18:43:43.221914   78141 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
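The warning above is a naming mismatch rather than a download failure: the preload ships images tagged k8s.gcr.io/*, LoadImages looks for registry.k8s.io/* names, marks every image as "needs transfer", and then finds no per-image files under .minikube/cache/images to fall back on. Since the bits are already in the node under the old tags, one hedged manual workaround is to retag them in place (a sketch; assumes the preloaded k8s.gcr.io images are still present):

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker exec ingress-addon-legacy-134000 docker tag "k8s.gcr.io/$img:v1.18.20" "registry.k8s.io/$img:v1.18.20"
    done
    # pause:3.2, etcd:3.4.3-0 and coredns:1.6.7 can be retagged the same way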
	I0108 18:43:43.221994   78141 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 18:43:43.269383   78141 cni.go:84] Creating CNI manager for ""
	I0108 18:43:43.269400   78141 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 18:43:43.269418   78141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 18:43:43.269435   78141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-134000 NodeName:ingress-addon-legacy-134000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 18:43:43.269543   78141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-134000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
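The generated manifest above bundles four documents: the v1beta2 InitConfiguration/ClusterConfiguration pair, a KubeletConfiguration (cgroupfs driver, disk eviction disabled, failSwapOn false), and a KubeProxyConfiguration pinned to the 10.244.0.0/16 cluster CIDR. Once the file is copied to /var/tmp/minikube/kubeadm.yaml a few lines below, the preflight phase can be exercised against it in isolation (a sketch, run inside the node; assumes kubeadm 1.18's init phases behave as documented):

    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml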
	I0108 18:43:43.269594   78141 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-134000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
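The drop-in above empties ExecStart and relaunches the versioned kubelet binary with the docker runtime, the node IP, and the bootstrap kubeconfig. After the 10-kubeadm.conf drop-in is written below, the merged unit can be inspected the same way minikube inspects docker.service earlier:

    docker exec ingress-addon-legacy-134000 systemctl cat kubelet
    docker exec ingress-addon-legacy-134000 systemctl show kubelet -p ExecStart --no-pager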
	I0108 18:43:43.269651   78141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 18:43:43.277836   78141 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 18:43:43.277897   78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 18:43:43.286127   78141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0108 18:43:43.301218   78141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 18:43:43.316864   78141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0108 18:43:43.332062   78141 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 18:43:43.336149   78141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 18:43:43.346467   78141 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000 for IP: 192.168.49.2
	I0108 18:43:43.346487   78141 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:43.346673   78141 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 18:43:43.346741   78141 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 18:43:43.346808   78141 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key
	I0108 18:43:43.346826   78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt with IP's: []
	I0108 18:43:43.490281   78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt ...
	I0108 18:43:43.490292   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt: {Name:mk787ba0b3882cc83956a94e4139ac44fd191304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:43.490627   78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key ...
	I0108 18:43:43.490637   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key: {Name:mk2a2cf0cf19a48cf28a8dc0d02263196e7191e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:43.490906   78141 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2
	I0108 18:43:43.490922   78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 18:43:43.900393   78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 ...
	I0108 18:43:43.900411   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2: {Name:mka578b65beb0dab13d354f4e15c4fe7cbd91dc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:43.900719   78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2 ...
	I0108 18:43:43.900729   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2: {Name:mk8e7102f6af35d10ac90492ab01b0faa12e31fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:43.900946   78141 certs.go:337] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt
	I0108 18:43:43.901130   78141 certs.go:341] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key
	I0108 18:43:43.901302   78141 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key
	I0108 18:43:43.901317   78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt with IP's: []
	I0108 18:43:44.146675   78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt ...
	I0108 18:43:44.146686   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt: {Name:mkc4f31e23d5f93d67fe805f12d900b8c6b58c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:44.146957   78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key ...
	I0108 18:43:44.146966   78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key: {Name:mk3abbebc9fc3110f7c2d8b1e682879a566efc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:43:44.147173   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 18:43:44.147202   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 18:43:44.147221   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 18:43:44.147238   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 18:43:44.147258   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 18:43:44.147275   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 18:43:44.147294   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 18:43:44.147311   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 18:43:44.147406   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 18:43:44.147460   78141 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 18:43:44.147470   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 18:43:44.147503   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 18:43:44.147533   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 18:43:44.147563   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 18:43:44.147625   78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 18:43:44.147663   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 18:43:44.147683   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem -> /usr/share/ca-certificates/75369.pem
	I0108 18:43:44.147728   78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> /usr/share/ca-certificates/753692.pem
	I0108 18:43:44.148209   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 18:43:44.169002   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 18:43:44.188874   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 18:43:44.209076   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 18:43:44.229137   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 18:43:44.249326   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 18:43:44.269546   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 18:43:44.289831   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 18:43:44.310231   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 18:43:44.330656   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 18:43:44.350630   78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 18:43:44.370609   78141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 18:43:44.385949   78141 ssh_runner.go:195] Run: openssl version
	I0108 18:43:44.391132   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 18:43:44.400001   78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 18:43:44.404071   78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 18:43:44.404116   78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 18:43:44.410573   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 18:43:44.419553   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 18:43:44.428642   78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 18:43:44.432645   78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 18:43:44.432694   78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 18:43:44.438891   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 18:43:44.447692   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 18:43:44.456412   78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 18:43:44.460494   78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 18:43:44.460540   78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 18:43:44.466904   78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
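The test/ln pairs above install each certificate into OpenSSL's trust directory: openssl x509 -hash prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is how OpenSSL's hash-based lookup finds the PEM (b5213941 is minikubeCA's hash here; 51391683 and 3ec20f2e belong to the two test certs). Verifying one link by hand:

    H=$(docker exec ingress-addon-legacy-134000 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    docker exec ingress-addon-legacy-134000 ls -l "/etc/ssl/certs/$H.0"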
	I0108 18:43:44.475759   78141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 18:43:44.479766   78141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 18:43:44.479815   78141 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:43:44.479914   78141 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 18:43:44.498524   78141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 18:43:44.507164   78141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 18:43:44.515368   78141 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 18:43:44.515422   78141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 18:43:44.523432   78141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 18:43:44.523462   78141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 18:43:44.583613   78141 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 18:43:44.583666   78141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 18:43:44.815135   78141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 18:43:44.815224   78141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 18:43:44.815301   78141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 18:43:44.975543   78141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 18:43:44.976189   78141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 18:43:44.976233   78141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 18:43:45.053303   78141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 18:43:45.075152   78141 out.go:204]   - Generating certificates and keys ...
	I0108 18:43:45.075224   78141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 18:43:45.075284   78141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 18:43:45.241370   78141 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 18:43:45.310312   78141 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 18:43:45.435612   78141 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 18:43:45.608363   78141 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 18:43:45.685396   78141 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 18:43:45.685528   78141 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 18:43:45.864412   78141 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 18:43:45.864525   78141 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 18:43:46.009609   78141 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 18:43:46.070466   78141 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 18:43:46.196400   78141 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 18:43:46.196451   78141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 18:43:46.300675   78141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 18:43:46.391593   78141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 18:43:46.663240   78141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 18:43:46.799271   78141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 18:43:46.799823   78141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 18:43:46.843419   78141 out.go:204]   - Booting up control plane ...
	I0108 18:43:46.843592   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 18:43:46.843723   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 18:43:46.843866   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 18:43:46.844016   78141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 18:43:46.844239   78141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 18:44:26.809301   78141 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 18:44:26.810039   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:44:26.810283   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:44:31.811819   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:44:31.812011   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:44:41.813371   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:44:41.813583   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:45:01.815179   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:45:01.815433   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:45:41.816902   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:45:41.817202   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:45:41.817226   78141 kubeadm.go:322] 
	I0108 18:45:41.817271   78141 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0108 18:45:41.817322   78141 kubeadm.go:322] 		timed out waiting for the condition
	I0108 18:45:41.817330   78141 kubeadm.go:322] 
	I0108 18:45:41.817370   78141 kubeadm.go:322] 	This error is likely caused by:
	I0108 18:45:41.817404   78141 kubeadm.go:322] 		- The kubelet is not running
	I0108 18:45:41.817535   78141 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 18:45:41.817550   78141 kubeadm.go:322] 
	I0108 18:45:41.817673   78141 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 18:45:41.817718   78141 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0108 18:45:41.817751   78141 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0108 18:45:41.817757   78141 kubeadm.go:322] 
	I0108 18:45:41.817906   78141 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 18:45:41.818031   78141 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 18:45:41.818061   78141 kubeadm.go:322] 
	I0108 18:45:41.818149   78141 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0108 18:45:41.818203   78141 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0108 18:45:41.818285   78141 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0108 18:45:41.818332   78141 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0108 18:45:41.818345   78141 kubeadm.go:322] 
	I0108 18:45:41.819529   78141 kubeadm.go:322] W0109 02:43:44.582840    1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 18:45:41.819680   78141 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 18:45:41.819763   78141 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 18:45:41.819889   78141 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0108 18:45:41.819991   78141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 18:45:41.820117   78141 kubeadm.go:322] W0109 02:43:46.803454    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 18:45:41.820226   78141 kubeadm.go:322] W0109 02:43:46.804233    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 18:45:41.820295   78141 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 18:45:41.820369   78141 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
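This is the classic wait-control-plane failure: kubelet never answers its healthz probe on 127.0.0.1:10248, so kubeadm times out and init exits 1. Because the node is a Docker container, kubeadm's own suggestions translate into docker exec calls from the host; a diagnostic sketch, starting with the kubelet journal and the probe kubeadm kept retrying:

    docker exec ingress-addon-legacy-134000 systemctl status kubelet --no-pager
    docker exec ingress-addon-legacy-134000 journalctl -u kubelet --no-pager | tail -n 50
    docker exec ingress-addon-legacy-134000 curl -sS http://localhost:10248/healthz
    docker exec ingress-addon-legacy-134000 sh -c 'docker ps -a | grep kube | grep -v pause'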
	W0108 18:45:41.820453   78141 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0109 02:43:44.582840    1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0109 02:43:46.803454    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0109 02:43:46.804233    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0109 02:43:44.582840    1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0109 02:43:46.803454    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0109 02:43:46.804233    1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 18:45:41.820494   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 18:45:42.238492   78141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 18:45:42.248876   78141 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 18:45:42.248931   78141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 18:45:42.257156   78141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 18:45:42.257184   78141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 18:45:42.310128   78141 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 18:45:42.310169   78141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 18:45:42.534617   78141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 18:45:42.534709   78141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 18:45:42.534783   78141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 18:45:42.701434   78141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 18:45:42.702058   78141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 18:45:42.702112   78141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 18:45:42.773597   78141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 18:45:42.794908   78141 out.go:204]   - Generating certificates and keys ...
	I0108 18:45:42.795000   78141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 18:45:42.795066   78141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 18:45:42.795141   78141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 18:45:42.795197   78141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 18:45:42.795259   78141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 18:45:42.795300   78141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 18:45:42.795358   78141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 18:45:42.795404   78141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 18:45:42.795459   78141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 18:45:42.795523   78141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 18:45:42.795556   78141 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 18:45:42.795623   78141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 18:45:42.928986   78141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 18:45:42.995364   78141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 18:45:43.054138   78141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 18:45:43.391042   78141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 18:45:43.391462   78141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 18:45:43.412782   78141 out.go:204]   - Booting up control plane ...
	I0108 18:45:43.412922   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 18:45:43.413071   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 18:45:43.413182   78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 18:45:43.413320   78141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 18:45:43.413595   78141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 18:46:23.399917   78141 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 18:46:23.400739   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:46:23.400950   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:46:28.401863   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:46:28.402101   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:46:38.403712   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:46:38.403947   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:46:58.404263   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:46:58.404459   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:47:38.404949   78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 18:47:38.405229   78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 18:47:38.405246   78141 kubeadm.go:322] 
	I0108 18:47:38.405311   78141 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0108 18:47:38.405355   78141 kubeadm.go:322] 		timed out waiting for the condition
	I0108 18:47:38.405363   78141 kubeadm.go:322] 
	I0108 18:47:38.405395   78141 kubeadm.go:322] 	This error is likely caused by:
	I0108 18:47:38.405430   78141 kubeadm.go:322] 		- The kubelet is not running
	I0108 18:47:38.405524   78141 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 18:47:38.405531   78141 kubeadm.go:322] 
	I0108 18:47:38.405614   78141 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 18:47:38.405644   78141 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0108 18:47:38.405669   78141 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0108 18:47:38.405675   78141 kubeadm.go:322] 
	I0108 18:47:38.405756   78141 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 18:47:38.405825   78141 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 18:47:38.405830   78141 kubeadm.go:322] 
	I0108 18:47:38.405907   78141 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0108 18:47:38.405969   78141 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0108 18:47:38.406061   78141 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0108 18:47:38.406095   78141 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0108 18:47:38.406105   78141 kubeadm.go:322] 
	I0108 18:47:38.407203   78141 kubeadm.go:322] W0109 02:45:42.309787    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 18:47:38.407349   78141 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 18:47:38.407421   78141 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 18:47:38.407547   78141 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0108 18:47:38.407633   78141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 18:47:38.407740   78141 kubeadm.go:322] W0109 02:45:43.395871    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 18:47:38.407846   78141 kubeadm.go:322] W0109 02:45:43.396578    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 18:47:38.407919   78141 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 18:47:38.408019   78141 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0108 18:47:38.408042   78141 kubeadm.go:406] StartCluster complete in 3m53.930315785s
	I0108 18:47:38.408136   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 18:47:38.425681   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.425695   78141 logs.go:286] No container was found matching "kube-apiserver"
	I0108 18:47:38.425764   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 18:47:38.443299   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.443316   78141 logs.go:286] No container was found matching "etcd"
	I0108 18:47:38.443403   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 18:47:38.461312   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.461325   78141 logs.go:286] No container was found matching "coredns"
	I0108 18:47:38.461405   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 18:47:38.478596   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.478610   78141 logs.go:286] No container was found matching "kube-scheduler"
	I0108 18:47:38.478677   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 18:47:38.496189   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.496204   78141 logs.go:286] No container was found matching "kube-proxy"
	I0108 18:47:38.496272   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 18:47:38.512989   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.513008   78141 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 18:47:38.513097   78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 18:47:38.530398   78141 logs.go:284] 0 containers: []
	W0108 18:47:38.530412   78141 logs.go:286] No container was found matching "kindnet"
	I0108 18:47:38.530420   78141 logs.go:123] Gathering logs for kubelet ...
	I0108 18:47:38.530426   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 18:47:38.565181   78141 logs.go:123] Gathering logs for dmesg ...
	I0108 18:47:38.565196   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 18:47:38.577476   78141 logs.go:123] Gathering logs for describe nodes ...
	I0108 18:47:38.577492   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 18:47:38.645012   78141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 18:47:38.645028   78141 logs.go:123] Gathering logs for Docker ...
	I0108 18:47:38.645039   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 18:47:38.660129   78141 logs.go:123] Gathering logs for container status ...
	I0108 18:47:38.660145   78141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 18:47:38.706684   78141 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0109 02:45:42.309787    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0109 02:45:43.395871    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0109 02:45:43.396578    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 18:47:38.706710   78141 out.go:239] * 
	W0108 18:47:38.706751   78141 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0109 02:45:42.309787    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0109 02:45:43.395871    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0109 02:45:43.396578    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 18:47:38.706766   78141 out.go:239] * 
	W0108 18:47:38.707389   78141 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 18:47:38.790984   78141 out.go:177] 
	W0108 18:47:38.832847   78141 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0109 02:45:42.309787    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0109 02:45:43.395871    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0109 02:45:43.396578    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 18:47:38.832908   78141 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 18:47:38.832933   78141 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 18:47:38.854021   78141 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.60s)
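Note: the stderr warnings in the failure above report a cgroup-driver mismatch (Docker detected as "cgroupfs" while "systemd" is recommended), which is consistent with the kubelet repeatedly failing its healthz check. As a hedged sketch only, two possible remedies follow: the standard Docker daemon setting that switches the cgroup driver (the daemon.json path and restart step are assumptions about the node, not taken from this log), and a retry of the failing start command using the exact flag the log itself suggests.

	# Sketch 1 (assumption, not from this log): align Docker's cgroup driver with
	# the recommended "systemd" by setting it in /etc/docker/daemon.json inside
	# the node, then restarting the Docker daemon.
	echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker

	# Sketch 2 (from the log's own suggestion): retry the same start invocation,
	# passing the kubelet cgroup driver explicitly.
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd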

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (85.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-134000 addons enable ingress --alsologtostderr -v=5
E0108 18:48:34.369553   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-134000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m25.082544339s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 18:47:39.015835   78295 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:47:39.016833   78295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:47:39.016840   78295 out.go:309] Setting ErrFile to fd 2...
	I0108 18:47:39.016845   78295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:47:39.017036   78295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:47:39.017388   78295 mustload.go:65] Loading cluster: ingress-addon-legacy-134000
	I0108 18:47:39.017706   78295 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:47:39.017722   78295 addons.go:600] checking whether the cluster is paused
	I0108 18:47:39.017801   78295 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:47:39.017818   78295 host.go:66] Checking if "ingress-addon-legacy-134000" exists ...
	I0108 18:47:39.018226   78295 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:47:39.068647   78295 ssh_runner.go:195] Run: systemctl --version
	I0108 18:47:39.068738   78295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:47:39.120158   78295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:47:39.210112   78295 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 18:47:39.248247   78295 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0108 18:47:39.269199   78295 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:47:39.269226   78295 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-134000"
	I0108 18:47:39.269240   78295 addons.go:237] Setting addon ingress=true in "ingress-addon-legacy-134000"
	I0108 18:47:39.269289   78295 host.go:66] Checking if "ingress-addon-legacy-134000" exists ...
	I0108 18:47:39.269881   78295 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:47:39.343223   78295 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0108 18:47:39.365320   78295 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0108 18:47:39.387114   78295 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0108 18:47:39.408299   78295 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0108 18:47:39.429515   78295 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 18:47:39.429550   78295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0108 18:47:39.429679   78295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:47:39.481955   78295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:47:39.584294   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:39.641010   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:39.641045   78295 retry.go:31] will retry after 180.868821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:39.822754   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:39.870872   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:39.870891   78295 retry.go:31] will retry after 494.455044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:40.366673   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:40.422327   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:40.422351   78295 retry.go:31] will retry after 635.561047ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:41.060186   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:41.110868   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:41.110893   78295 retry.go:31] will retry after 1.134244815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:42.246601   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:42.296509   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:42.296527   78295 retry.go:31] will retry after 1.384517759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:43.681979   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:43.732000   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:43.732021   78295 retry.go:31] will retry after 1.103896039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:44.837096   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:44.894151   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:44.894170   78295 retry.go:31] will retry after 3.93701879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:48.832854   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:48.890668   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:48.890692   78295 retry.go:31] will retry after 5.667397261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:54.560272   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:47:54.611160   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:47:54.611178   78295 retry.go:31] will retry after 7.023974936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:01.635352   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:48:01.695962   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:01.695982   78295 retry.go:31] will retry after 10.823406637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:12.520662   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:48:12.571243   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:12.571260   78295 retry.go:31] will retry after 16.710797283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:29.283633   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:48:29.348959   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:29.348982   78295 retry.go:31] will retry after 13.768282882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:43.117529   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:48:43.165063   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:48:43.165081   78295 retry.go:31] will retry after 20.694379757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:03.860834   78295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 18:49:03.920047   78295 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:03.920075   78295 addons.go:473] Verifying addon ingress=true in "ingress-addon-legacy-134000"
	I0108 18:49:03.941504   78295 out.go:177] * Verifying ingress addon...
	I0108 18:49:03.963330   78295 out.go:177] 
	W0108 18:49:03.984503   78295 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-134000" does not exist: client config: context "ingress-addon-legacy-134000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-134000" does not exist: client config: context "ingress-addon-legacy-134000" does not exist]
	W0108 18:49:03.984541   78295 out.go:239] * 
	* 
	W0108 18:49:03.994009   78295 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 18:49:04.015436   78295 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
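Note: the retry.go:31 lines above show the addon enabler re-running the failed kubectl apply on a randomized, roughly exponential schedule (180ms, 494ms, 635ms, ... up to ~20s) before giving up. A minimal sketch of that backoff pattern under stated assumptions; this is a hypothetical stand-in, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo calls fn until it succeeds or maxElapsed passes, sleeping an
// exponentially growing, jittered interval between attempts.
func retryExpo(fn func() error, initial, maxElapsed time.Duration) error {
	deadline := time.Now().Add(maxElapsed)
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter in [0.5, 1.5) * delay, which would produce the uneven
		// intervals (180ms, 494ms, 635ms, ...) seen in the log.
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryExpo(func() error {
		return errors.New("The connection to the server localhost:8443 was refused")
	}, 200*time.Millisecond, 5*time.Second)
	fmt.Println(err)
}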
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-134000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-134000:

-- stdout --
	[
	    {
	        "Id": "28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35",
	        "Created": "2024-01-09T02:43:29.294759326Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T02:43:29.512178707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hostname",
	        "HostsPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hosts",
	        "LogPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35-json.log",
	        "Name": "/ingress-addon-legacy-134000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-134000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-134000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-134000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-134000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-134000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1b900bd0dfbdf86a50e9108075e770001e3ef85568116a5588e8ff5e3c2c3eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63375"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63376"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63377"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1b900bd0dfb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-134000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28cee81683d1",
	                        "ingress-addon-legacy-134000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "2f4a771560d4acaacce6851d0fb0afe895310c49469911308d4c7977a901e83b",
	                    "EndpointID": "f6783b01d339c9e4f38e113de97116ad3ca3f48da8ed9c086a2539f817ccfb03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
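Note: the inspect output above shows every node port published on an ephemeral host port (22/tcp at 63373, 8443/tcp at 63377). A minimal sketch of extracting one of them with the same Go template string the cli_runner log lines use, assuming the docker CLI is on PATH (container name taken from this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "ingress-addon-legacy-134000").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this would print 63373, matching the
	// sshutil.go log lines.
	fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
}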
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000: exit status 6 (383.693546ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0108 18:49:04.466389   78321 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-134000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-134000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (85.52s)
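Note: the status.go:415 error above means the profile's context was never written to the kubeconfig, because the apiserver never came up during start; `minikube update-context` is the remediation the status output itself suggests. A minimal sketch of the same existence check using k8s.io/client-go (illustrative, not minikube's actual status code; path and profile name taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/Users/jenkins/minikube-integration/17866-74927/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["ingress-addon-legacy-134000"]; !ok {
		// The condition the test tripped: no context for the profile,
		// so every kubectl call falls back to a stale endpoint.
		fmt.Println(`"ingress-addon-legacy-134000" does not appear in`, path)
	}
}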

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (101.32s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-134000 addons enable ingress-dns --alsologtostderr -v=5
E0108 18:50:24.129277   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-134000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m40.880826056s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0108 18:49:04.533804   78331 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:49:04.534720   78331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:49:04.534727   78331 out.go:309] Setting ErrFile to fd 2...
	I0108 18:49:04.534731   78331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:49:04.534926   78331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:49:04.535266   78331 mustload.go:65] Loading cluster: ingress-addon-legacy-134000
	I0108 18:49:04.535560   78331 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:49:04.535576   78331 addons.go:600] checking whether the cluster is paused
	I0108 18:49:04.535658   78331 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:49:04.535675   78331 host.go:66] Checking if "ingress-addon-legacy-134000" exists ...
	I0108 18:49:04.536063   78331 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:49:04.586391   78331 ssh_runner.go:195] Run: systemctl --version
	I0108 18:49:04.586486   78331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:49:04.638110   78331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:49:04.729280   78331 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 18:49:04.768671   78331 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0108 18:49:04.790018   78331 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 18:49:04.790045   78331 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-134000"
	I0108 18:49:04.790059   78331 addons.go:237] Setting addon ingress-dns=true in "ingress-addon-legacy-134000"
	I0108 18:49:04.790111   78331 host.go:66] Checking if "ingress-addon-legacy-134000" exists ...
	I0108 18:49:04.790710   78331 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
	I0108 18:49:04.864754   78331 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0108 18:49:04.886648   78331 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0108 18:49:04.907940   78331 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 18:49:04.907975   78331 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0108 18:49:04.908113   78331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
	I0108 18:49:04.960308   78331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
	I0108 18:49:05.062064   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:05.111074   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:05.111102   78331 retry.go:31] will retry after 271.53201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:05.383362   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:05.494755   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:05.494778   78331 retry.go:31] will retry after 206.802999ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:05.703879   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:05.765261   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:05.765282   78331 retry.go:31] will retry after 536.136197ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:06.303623   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:06.352195   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:06.352221   78331 retry.go:31] will retry after 819.376677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:07.171774   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:07.227223   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:07.227250   78331 retry.go:31] will retry after 795.736713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:08.023464   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:08.083613   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:08.083627   78331 retry.go:31] will retry after 2.25350986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:10.339428   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:10.398266   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:10.398285   78331 retry.go:31] will retry after 2.582457632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:12.981756   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:13.046090   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:13.046109   78331 retry.go:31] will retry after 3.205388919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:16.252179   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:16.309848   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:16.309867   78331 retry.go:31] will retry after 7.354376486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:23.664985   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:23.715958   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:23.715976   78331 retry.go:31] will retry after 12.455536483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:36.172086   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:36.229877   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:36.229913   78331 retry.go:31] will retry after 8.745615414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:44.976392   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:49:45.039887   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:49:45.039904   78331 retry.go:31] will retry after 16.737826302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:50:01.778593   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:50:01.839270   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:50:01.839290   78331 retry.go:31] will retry after 43.373143947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:50:45.213994   78331 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 18:50:45.262553   78331 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 18:50:45.284264   78331 out.go:177] 
	W0108 18:50:45.305200   78331 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0108 18:50:45.305225   78331 out.go:239] * 
	* 
	W0108 18:50:45.313996   78331 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 18:50:45.334982   78331 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
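Note: every retry above fails with the same symptom as the ingress addon before it: connections to localhost:8443 inside the node are refused, i.e. kube-apiserver is not listening. A minimal sketch of a direct probe (illustrative; host and port taken from the log, and it must be run where the port is reachable, e.g. via `minikube ssh`):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the kubectl symptom: connect(2) fails with
		// ECONNREFUSED because kube-apiserver never started.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}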
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-134000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-134000:

-- stdout --
	[
	    {
	        "Id": "28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35",
	        "Created": "2024-01-09T02:43:29.294759326Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T02:43:29.512178707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hostname",
	        "HostsPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hosts",
	        "LogPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35-json.log",
	        "Name": "/ingress-addon-legacy-134000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-134000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-134000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-134000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-134000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-134000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1b900bd0dfbdf86a50e9108075e770001e3ef85568116a5588e8ff5e3c2c3eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63375"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63376"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63377"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1b900bd0dfb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-134000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28cee81683d1",
	                        "ingress-addon-legacy-134000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "2f4a771560d4acaacce6851d0fb0afe895310c49469911308d4c7977a901e83b",
	                    "EndpointID": "f6783b01d339c9e4f38e113de97116ad3ca3f48da8ed9c086a2539f817ccfb03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
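In the inspect output above, HostConfig.PortBindings requests dynamic host ports ("HostPort": "0"), and the actual assignments land under NetworkSettings.Ports. A sketch of reading the SSH mapping back out with the standard docker CLI (the Go-template form is equivalent to `docker port`):

	docker port ingress-addon-legacy-134000 22/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-134000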
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000: exit status 6 (382.424551ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0108 18:50:45.782404   78352 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-134000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-134000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (101.32s)
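Both the status check and the warning point at a kubeconfig that no longer references this profile. A minimal repair sketch, assuming the profile name from this run; `minikube update-context` is the fix the warning itself suggests:

	kubectl config get-contexts                         # ingress-addon-legacy-134000 should be listed
	minikube update-context -p ingress-addon-legacy-134000
	kubectl config use-context ingress-addon-legacy-134000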

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-134000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-134000:

-- stdout --
	[
	    {
	        "Id": "28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35",
	        "Created": "2024-01-09T02:43:29.294759326Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T02:43:29.512178707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hostname",
	        "HostsPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/hosts",
	        "LogPath": "/var/lib/docker/containers/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35/28cee81683d1e302d6b2a81e6422ea06b82263533651dc9148b80a7b169c1f35-json.log",
	        "Name": "/ingress-addon-legacy-134000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-134000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-134000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8909f4432fab49f50edaffe276c9dfb5519c6cfe6672d22e68c5433d8be11a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-134000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-134000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-134000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-134000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1b900bd0dfbdf86a50e9108075e770001e3ef85568116a5588e8ff5e3c2c3eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63375"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63376"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63377"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1b900bd0dfb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-134000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28cee81683d1",
	                        "ingress-addon-legacy-134000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "2f4a771560d4acaacce6851d0fb0afe895310c49469911308d4c7977a901e83b",
	                    "EndpointID": "f6783b01d339c9e4f38e113de97116ad3ca3f48da8ed9c086a2539f817ccfb03",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-134000 -n ingress-addon-legacy-134000: exit status 6 (383.729992ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0108 18:50:46.219365   78364 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-134000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-134000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)
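"failed to get Kubernetes client: <nil>" has the same root cause as the previous failure: status.go:415 reports that the profile never made it into the kubeconfig. A sketch of verifying that directly, using the kubeconfig path taken from the error above:

	grep -c ingress-addon-legacy-134000 /Users/jenkins/minikube-integration/17866-74927/kubeconfig   # 0 here confirms the missing entry
	kubectl --kubeconfig /Users/jenkins/minikube-integration/17866-74927/kubeconfig config get-clusters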

TestRunningBinaryUpgrade (65.68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker : exit status 70 (50.468447228s)

-- stdout --
	! [running-upgrade-232000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2313117641
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:10:03.142108805 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-232000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:10:17.117496958 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-232000", then "minikube start -p running-upgrade-232000 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 4.48 MiB ... 542.91 MiB [carriage-return progress meter, flattened; intermediate updates elided]
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:10:17.117496958 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
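The comments in the diff above describe a general systemd rule: for anything other than Type=oneshot, a unit may carry only one ExecStart, so an override that replaces the command must first clear the inherited one with an empty `ExecStart=`. A generic sketch of that pattern as a drop-in, for illustration only (the legacy provisioner here instead rewrites /lib/systemd/system/docker.service in place via the `sudo mv` shown above, and the subsequent restart fails inside the kic container):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker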
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker : exit status 70 (3.970493249s)

-- stdout --
	* [running-upgrade-232000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1877947823
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-232000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
E0108 19:10:24.078000   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2108175553.exe start -p running-upgrade-232000 --memory=2200 --vm-driver=docker : exit status 70 (4.141521363s)

-- stdout --
	* [running-upgrade-232000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3871150261
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-232000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
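Each retry now fails at `sudo systemctl start docker` inside the already-provisioned container. The log's own suggestion is the right next step; a sketch of running it through the kic container (container name taken from this test), since the failing service lives inside it rather than on the macOS host:

	docker exec running-upgrade-232000 systemctl status docker.service --no-pager
	docker exec running-upgrade-232000 journalctl -xeu docker.service --no-pager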
version_upgrade_test.go:139: legacy v1.9.0 start failed: exit status 70
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 19:10:30.196168 -0800 PST m=+2340.329715404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-232000
helpers_test.go:235: (dbg) docker inspect running-upgrade-232000:

-- stdout --
	[
	    {
	        "Id": "b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3",
	        "Created": "2024-01-09T03:10:11.727335733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195263,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:10:11.916778226Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3/hosts",
	        "LogPath": "/var/lib/docker/containers/b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3/b36de53c904ad999a79a83281ba9283dad4c2ef26558e9065b2d37837216b1b3-json.log",
	        "Name": "/running-upgrade-232000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-232000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bdbeeec37af26ae19100f39704b668eab931d93fc94d0b3538ffc049da15f9-init/diff:/var/lib/docker/overlay2/3a6290af1f9fb00a167faa49909a8c5ab47438b8c78b73bc73069ba1a8dc9df5/diff:/var/lib/docker/overlay2/ec2f3de792bcb0fe1a578a749fb0e0627e42025ade20dc76660b59a69331712f/diff:/var/lib/docker/overlay2/327a876bdb94c462457ec170672bc17753f2a6bcd4c629c13d9fee2e4b0d0f5f/diff:/var/lib/docker/overlay2/7da6105b2a85fd0072ee2938b24fec478fe62881146e2ade5b30f53e36ef5442/diff:/var/lib/docker/overlay2/224e6d58c682206b52f22cec4bf8069c38a1adc99ddc098643e156487a1a483c/diff:/var/lib/docker/overlay2/cd5bc72add223e92068d5f54ba89258d0a795e49fa3e1a1ead8a40d49ea66b10/diff:/var/lib/docker/overlay2/d9c68b5ebb2703a3dd61fca199b2de6997cd89decf05b9b3b0875308650ca009/diff:/var/lib/docker/overlay2/5f7ddf4b0a05f9d640ea75a17ced3d1827200ba230efa895044a71b42088c4aa/diff:/var/lib/docker/overlay2/81f74dd343a22db97bc46012b61dc4cdc49486c195ba34bf2a4e5b4d1ec62d9d/diff:/var/lib/docker/overlay2/2934ed
71f19dbd83587218f6f0a407ea57758149bdfebb7f500ea6e59734a17a/diff:/var/lib/docker/overlay2/18d80c6d16684bccdfb9068bf694fc3d0ab2612293c6c92fd8a2876368c74e07/diff:/var/lib/docker/overlay2/977962fd7a5a4b475269e3e7fdbf6f1e1b349b5ece0ce6ba059252668e838545/diff:/var/lib/docker/overlay2/1f468b4d8b733cec7aa70a0128bb91b51ae359d0466095da2971a0763d4313be/diff:/var/lib/docker/overlay2/4d17689f340896451f73daf938bf76c3eb5684beff10710afa8eaf29de3e871d/diff:/var/lib/docker/overlay2/6d35290a238ea19a190e50167ac1f5cacb4440903f8124a419db0f000708c68e/diff:/var/lib/docker/overlay2/418a9b09b84c93accc660ec34ede73050503924dfecdc4923c5ceedc8fccf224/diff:/var/lib/docker/overlay2/9ef48ea3e759c63e32e849b1bad501c7d587982611f2d2c435404f51ac5c389a/diff:/var/lib/docker/overlay2/4c26904d7773ed9456b18068ad648cda4206f2dc68bde11eb0183e3ef59c996e/diff:/var/lib/docker/overlay2/433bace7b63d8cd8311577d2ea0240f82c746323c0b0fc5d86564aabab99b79e/diff:/var/lib/docker/overlay2/b3bb26e7399d8cdabb735371247852fe18aa7d55968ec7235d91b03ea0ce1002/diff:/var/lib/d
ocker/overlay2/7b262596826f20b97d430eceb9b5dd9815629c01caa944ea2c55fa69def45d14/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bdbeeec37af26ae19100f39704b668eab931d93fc94d0b3538ffc049da15f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bdbeeec37af26ae19100f39704b668eab931d93fc94d0b3538ffc049da15f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bdbeeec37af26ae19100f39704b668eab931d93fc94d0b3538ffc049da15f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-232000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-232000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-232000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-232000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-232000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be4dd7628e58ca71da7623e807aa562b14e9479353844166f5d74633c47b9732",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64675"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64676"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64677"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be4dd7628e58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e7af64ef557533101e3f287087d838a1c8aa0449f3ec9add9bf8466b94b40b60",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "NetworkID": "186365f5f9476a341bd6e61888a3e950b0d05a8881bb17afb5c533a1be684d09",
	                    "EndpointID": "e7af64ef557533101e3f287087d838a1c8aa0449f3ec9add9bf8466b94b40b60",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
]

-- /stdout --
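For triage, the published-port mappings buried in the inspect dump above can be read out directly with a Go-template filter instead of scanning the full JSON — a minimal sketch, using the container name from the dump:

	docker inspect -f '{{json .NetworkSettings.Ports}}' running-upgrade-232000
	# per the dump above, this would print: {"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"64675"}],"2376/tcp":[{"HostIp":"127.0.0.1","HostPort":"64676"}],"8443/tcp":[{"HostIp":"127.0.0.1","HostPort":"64677"}]}
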
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-232000 -n running-upgrade-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-232000 -n running-upgrade-232000: exit status 6 (364.132976ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0108 19:10:30.602665   83992 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-232000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

** /stderr **
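The stderr above is the root cause of the status failure: the "running-upgrade-232000" entry is missing from /Users/jenkins/minikube-integration/17866-74927/kubeconfig, so status exits 6 even though the container reports Running. The warning in the stdout block names the fix; a recovery sketch (not executed as part of this run) would be:

	out/minikube-darwin-amd64 update-context -p running-upgrade-232000
	kubectl config get-contexts running-upgrade-232000   # verify the context entry now resolves
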
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-232000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-232000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-232000: (2.137597153s)
--- FAIL: TestRunningBinaryUpgrade (65.68s)

TestKubernetesUpgrade (319.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m12.779773607s)

-- stdout --
	* [kubernetes-upgrade-658000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-658000 in cluster kubernetes-upgrade-658000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	

-- /stdout --
** stderr ** 
	I0108 19:11:14.660711   84354 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:11:14.661015   84354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:11:14.661021   84354 out.go:309] Setting ErrFile to fd 2...
	I0108 19:11:14.661025   84354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:11:14.661220   84354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:11:14.662654   84354 out.go:303] Setting JSON to false
	I0108 19:11:14.685128   84354 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":36646,"bootTime":1704733228,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:11:14.685235   84354 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:11:14.706760   84354 out.go:177] * [kubernetes-upgrade-658000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:11:14.727501   84354 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:11:14.727616   84354 notify.go:220] Checking for updates...
	I0108 19:11:14.769420   84354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:11:14.790444   84354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:11:14.811307   84354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:11:14.832465   84354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:11:14.853636   84354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:11:14.875148   84354 config.go:182] Loaded profile config "cert-expiration-871000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:11:14.875321   84354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:11:14.932188   84354 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:11:14.932338   84354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:11:15.031950   84354 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:11:15.022292382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:11:15.053175   84354 out.go:177] * Using the docker driver based on user configuration
	I0108 19:11:15.074070   84354 start.go:298] selected driver: docker
	I0108 19:11:15.074113   84354 start.go:902] validating driver "docker" against <nil>
	I0108 19:11:15.074128   84354 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:11:15.078540   84354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:11:15.178452   84354 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:11:15.168733115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:11:15.178636   84354 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 19:11:15.178812   84354 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 19:11:15.200019   84354 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 19:11:15.221090   84354 cni.go:84] Creating CNI manager for ""
	I0108 19:11:15.221118   84354 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:11:15.221137   84354 start_flags.go:321] config:
	{Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:11:15.264030   84354 out.go:177] * Starting control plane node kubernetes-upgrade-658000 in cluster kubernetes-upgrade-658000
	I0108 19:11:15.285006   84354 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:11:15.305969   84354 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:11:15.348065   84354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:11:15.348155   84354 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 19:11:15.348161   84354 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:11:15.348190   84354 cache.go:56] Caching tarball of preloaded images
	I0108 19:11:15.348408   84354 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:11:15.348427   84354 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 19:11:15.348538   84354 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/config.json ...
	I0108 19:11:15.348575   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/config.json: {Name:mk0c2f63f4f4979e2129a1c9ce878b4022765df2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:15.401303   84354 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:11:15.401354   84354 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:11:15.401377   84354 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:11:15.401432   84354 start.go:365] acquiring machines lock for kubernetes-upgrade-658000: {Name:mke48780d135ebd99d14a0e8964881eb6f37a1d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:11:15.401756   84354 start.go:369] acquired machines lock for "kubernetes-upgrade-658000" in 310.048µs
	I0108 19:11:15.401782   84354 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 19:11:15.401855   84354 start.go:125] createHost starting for "" (driver="docker")
	I0108 19:11:15.424763   84354 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 19:11:15.425113   84354 start.go:159] libmachine.API.Create for "kubernetes-upgrade-658000" (driver="docker")
	I0108 19:11:15.425157   84354 client.go:168] LocalClient.Create starting
	I0108 19:11:15.425312   84354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem
	I0108 19:11:15.425401   84354 main.go:141] libmachine: Decoding PEM data...
	I0108 19:11:15.425433   84354 main.go:141] libmachine: Parsing certificate...
	I0108 19:11:15.425539   84354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem
	I0108 19:11:15.425607   84354 main.go:141] libmachine: Decoding PEM data...
	I0108 19:11:15.425627   84354 main.go:141] libmachine: Parsing certificate...
	I0108 19:11:15.426469   84354 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-658000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 19:11:15.478944   84354 cli_runner.go:211] docker network inspect kubernetes-upgrade-658000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 19:11:15.479063   84354 network_create.go:281] running [docker network inspect kubernetes-upgrade-658000] to gather additional debugging logs...
	I0108 19:11:15.479084   84354 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-658000
	W0108 19:11:15.529006   84354 cli_runner.go:211] docker network inspect kubernetes-upgrade-658000 returned with exit code 1
	I0108 19:11:15.529045   84354 network_create.go:284] error running [docker network inspect kubernetes-upgrade-658000]: docker network inspect kubernetes-upgrade-658000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-658000 not found
	I0108 19:11:15.529056   84354 network_create.go:286] output of [docker network inspect kubernetes-upgrade-658000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-658000 not found
	
	** /stderr **
	I0108 19:11:15.529189   84354 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 19:11:15.581163   84354 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:11:15.581531   84354 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021d13a0}
	I0108 19:11:15.581549   84354 network_create.go:124] attempt to create docker network kubernetes-upgrade-658000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0108 19:11:15.581623   84354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000
	W0108 19:11:15.631727   84354 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000 returned with exit code 1
	W0108 19:11:15.631771   84354 network_create.go:149] failed to create docker network kubernetes-upgrade-658000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0108 19:11:15.631785   84354 network_create.go:116] failed to create docker network kubernetes-upgrade-658000 192.168.58.0/24, will retry: subnet is taken
	I0108 19:11:15.633359   84354 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:11:15.633708   84354 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022b5080}
	I0108 19:11:15.633722   84354 network_create.go:124] attempt to create docker network kubernetes-upgrade-658000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0108 19:11:15.633790   84354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000
	W0108 19:11:15.685054   84354 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000 returned with exit code 1
	W0108 19:11:15.685089   84354 network_create.go:149] failed to create docker network kubernetes-upgrade-658000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0108 19:11:15.685108   84354 network_create.go:116] failed to create docker network kubernetes-upgrade-658000 192.168.67.0/24, will retry: subnet is taken
	I0108 19:11:15.686452   84354 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:11:15.686803   84354 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00242a1e0}
	I0108 19:11:15.686817   84354 network_create.go:124] attempt to create docker network kubernetes-upgrade-658000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0108 19:11:15.686892   84354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 kubernetes-upgrade-658000
	I0108 19:11:15.774386   84354 network_create.go:108] docker network kubernetes-upgrade-658000 192.168.76.0/24 created
	I0108 19:11:15.774436   84354 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-658000" container
	I0108 19:11:15.774561   84354 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 19:11:15.825553   84354 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-658000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 --label created_by.minikube.sigs.k8s.io=true
	I0108 19:11:15.876910   84354 oci.go:103] Successfully created a docker volume kubernetes-upgrade-658000
	I0108 19:11:15.877023   84354 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-658000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 --entrypoint /usr/bin/test -v kubernetes-upgrade-658000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0108 19:11:16.303980   84354 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-658000
	I0108 19:11:16.304017   84354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:11:16.304032   84354 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 19:11:16.304139   84354 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-658000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 19:11:18.292312   84354 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-658000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (1.988165704s)
	I0108 19:11:18.292346   84354 kic.go:203] duration metric: took 1.988365 seconds to extract preloaded images to volume
	I0108 19:11:18.292443   84354 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 19:11:18.394163   84354 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-658000 --name kubernetes-upgrade-658000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-658000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-658000 --network kubernetes-upgrade-658000 --ip 192.168.76.2 --volume kubernetes-upgrade-658000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0108 19:11:18.667090   84354 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Running}}
	I0108 19:11:18.722185   84354 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:11:18.778799   84354 cli_runner.go:164] Run: docker exec kubernetes-upgrade-658000 stat /var/lib/dpkg/alternatives/iptables
	I0108 19:11:18.873719   84354 oci.go:144] the created container "kubernetes-upgrade-658000" has a running status.
	I0108 19:11:18.873762   84354 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa...
	I0108 19:11:19.065895   84354 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 19:11:19.127705   84354 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:11:19.186310   84354 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 19:11:19.186336   84354 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-658000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 19:11:19.283365   84354 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:11:19.333910   84354 machine.go:88] provisioning docker machine ...
	I0108 19:11:19.333959   84354 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-658000"
	I0108 19:11:19.334071   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:19.386117   84354 main.go:141] libmachine: Using SSH client type: native
	I0108 19:11:19.386459   84354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 64786 <nil> <nil>}
	I0108 19:11:19.386473   84354 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-658000 && echo "kubernetes-upgrade-658000" | sudo tee /etc/hostname
	I0108 19:11:19.531666   84354 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-658000
	
	I0108 19:11:19.531784   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:19.583842   84354 main.go:141] libmachine: Using SSH client type: native
	I0108 19:11:19.584129   84354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 64786 <nil> <nil>}
	I0108 19:11:19.584144   84354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-658000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-658000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-658000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:11:19.717474   84354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 19:11:19.717505   84354 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:11:19.717539   84354 ubuntu.go:177] setting up certificates
	I0108 19:11:19.717555   84354 provision.go:83] configureAuth start
	I0108 19:11:19.717671   84354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-658000
	I0108 19:11:19.769538   84354 provision.go:138] copyHostCerts
	I0108 19:11:19.769646   84354 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:11:19.769660   84354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:11:19.769804   84354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:11:19.770019   84354 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:11:19.770025   84354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:11:19.770121   84354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:11:19.770288   84354 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:11:19.770294   84354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:11:19.770378   84354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:11:19.770525   84354 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-658000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-658000]
	I0108 19:11:19.883718   84354 provision.go:172] copyRemoteCerts
	I0108 19:11:19.883775   84354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:11:19.883847   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:19.935062   84354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64786 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:11:20.030232   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:11:20.050229   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 19:11:20.070203   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 19:11:20.090311   84354 provision.go:86] duration metric: configureAuth took 372.748577ms
	I0108 19:11:20.090325   84354 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:11:20.090450   84354 config.go:182] Loaded profile config "kubernetes-upgrade-658000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 19:11:20.090517   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:20.142618   84354 main.go:141] libmachine: Using SSH client type: native
	I0108 19:11:20.142931   84354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 64786 <nil> <nil>}
	I0108 19:11:20.142950   84354 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:11:20.276123   84354 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:11:20.276138   84354 ubuntu.go:71] root file system type: overlay
	I0108 19:11:20.276244   84354 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:11:20.276328   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:20.327544   84354 main.go:141] libmachine: Using SSH client type: native
	I0108 19:11:20.327875   84354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 64786 <nil> <nil>}
	I0108 19:11:20.327923   84354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:11:20.470392   84354 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:11:20.470505   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:20.522409   84354 main.go:141] libmachine: Using SSH client type: native
	I0108 19:11:20.522713   84354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 64786 <nil> <nil>}
	I0108 19:11:20.522731   84354 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:11:21.100861   84354 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:11:20.468630926 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 19:11:21.100896   84354 machine.go:91] provisioned docker machine in 1.767008564s
	I0108 19:11:21.100907   84354 client.go:171] LocalClient.Create took 5.675889685s
	I0108 19:11:21.100923   84354 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-658000" took 5.675959626s
	I0108 19:11:21.100931   84354 start.go:300] post-start starting for "kubernetes-upgrade-658000" (driver="docker")
	I0108 19:11:21.100939   84354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:11:21.101018   84354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:11:21.101075   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:21.155393   84354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64786 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:11:21.250832   84354 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:11:21.254729   84354 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:11:21.254753   84354 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:11:21.254760   84354 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:11:21.254768   84354 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:11:21.254778   84354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:11:21.254885   84354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:11:21.255078   84354 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:11:21.255291   84354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:11:21.263279   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:11:21.283223   84354 start.go:303] post-start completed in 182.287257ms
	I0108 19:11:21.283829   84354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-658000
	I0108 19:11:21.335854   84354 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/config.json ...
	I0108 19:11:21.336323   84354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:11:21.336388   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:21.387812   84354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64786 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:11:21.479842   84354 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:11:21.484750   84354 start.go:128] duration metric: createHost completed in 6.083037926s
	I0108 19:11:21.484771   84354 start.go:83] releasing machines lock for "kubernetes-upgrade-658000", held for 6.083163219s
	I0108 19:11:21.484881   84354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-658000
	I0108 19:11:21.535910   84354 ssh_runner.go:195] Run: cat /version.json
	I0108 19:11:21.535912   84354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:11:21.535996   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:21.536004   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:21.589758   84354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64786 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:11:21.590027   84354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64786 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:11:21.799196   84354 ssh_runner.go:195] Run: systemctl --version
	I0108 19:11:21.803757   84354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 19:11:21.808730   84354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 19:11:21.830808   84354 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 19:11:21.830881   84354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 19:11:21.845657   84354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 19:11:21.860480   84354 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
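	The find/sed passes above rewrite every bridge- and podman-style CNI config to the 10.244.0.0/16 pod subnet and drop IPv6 dst/subnet entries; the loopback config additionally gains a "name" field and cniVersion 1.0.0. A sketch of checking the result (file names taken from the cni.go:308 line above):
	
		# Confirm the subnet rewrite landed in both patched configs
		sudo grep -H '"subnet"' /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist
		# Loopback config should now carry a name and cniVersion 1.0.0
		sudo grep -H -e '"name"' -e '"cniVersion"' /etc/cni/net.d/*loopback.conf*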
	I0108 19:11:21.860498   84354 start.go:475] detecting cgroup driver to use...
	I0108 19:11:21.860510   84354 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:11:21.860608   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:11:21.875264   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0108 19:11:21.884468   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:11:21.893917   84354 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:11:21.893985   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:11:21.903136   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:11:21.912220   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:11:21.921573   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:11:21.930807   84354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:11:21.939502   84354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
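	Taken together, the sed passes above pin sandbox_image to registry.k8s.io/pause:3.1, set restrict_oom_score_adj = false, force SystemdCgroup = false (i.e. cgroupfs), migrate runtimes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. One grep over the file confirms all of it at once:
	
		sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|runc\.v2|conf_dir' /etc/containerd/config.toml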
	I0108 19:11:21.948958   84354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:11:21.957051   84354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:11:21.965196   84354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:11:22.017439   84354 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 19:11:22.097265   84354 start.go:475] detecting cgroup driver to use...
	I0108 19:11:22.097287   84354 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:11:22.097360   84354 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:11:22.113882   84354 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:11:22.113966   84354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:11:22.125203   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:11:22.141721   84354 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:11:22.146063   84354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:11:22.154802   84354 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:11:22.172453   84354 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:11:22.247173   84354 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:11:22.345161   84354 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:11:22.345242   84354 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:11:22.361272   84354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:11:22.445053   84354 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:11:22.692425   84354 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:11:22.718168   84354 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
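	The 130-byte daemon.json scp'd above is not reproduced in the log; a representative file selecting the cgroupfs driver would be {"exec-opts": ["native.cgroupdriver=cgroupfs"]} (the exact content minikube writes may differ). The same check minikube runs later in this log verifies it took effect:
	
		docker info --format '{{.CgroupDriver}}'    # expect: cgroupfs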
	I0108 19:11:22.788866   84354 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0108 19:11:22.788989   84354 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-658000 dig +short host.docker.internal
	I0108 19:11:22.906053   84354 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:11:22.906196   84354 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:11:22.911065   84354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
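	The one-liner above is an idempotent hosts update: strip any existing host.minikube.internal entry, append the fresh mapping, and sudo-copy the temp file back over /etc/hosts. Unpacked, the same pattern (values copied from the log) reads:
	
		{ grep -v $'\thost.minikube.internal$' /etc/hosts
		  echo "192.168.65.254	host.minikube.internal"
		} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts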
	I0108 19:11:22.921608   84354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:11:22.973413   84354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:11:22.973497   84354 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:11:22.992401   84354 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:11:22.992417   84354 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:11:22.992471   84354 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:11:23.000950   84354 ssh_runner.go:195] Run: which lz4
	I0108 19:11:23.005055   84354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 19:11:23.008889   84354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 19:11:23.008917   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0108 19:11:28.058205   84354 docker.go:635] Took 5.053320 seconds to copy over tarball
	I0108 19:11:28.058279   84354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 19:11:29.598530   84354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.540264127s)
	I0108 19:11:29.598544   84354 ssh_runner.go:146] rm: /preloaded.tar.lz4
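	Preload handling above: the stat probe fails (no /preloaded.tar.lz4 yet), so the ~370 MB tarball is scp'd from the host cache, unpacked over /var with extended attributes preserved, then removed. Verifying such a tarball by hand before extraction is straightforward (lz4 -t tests frame integrity without unpacking):
	
		stat -c '%s' /preloaded.tar.lz4    # log reports 369789069 bytes copied
		lz4 -t /preloaded.tar.lz4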
	I0108 19:11:29.635158   84354 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:11:29.643497   84354 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0108 19:11:29.658999   84354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:11:29.715349   84354 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:11:30.172646   84354 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:11:30.192428   84354 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:11:30.192441   84354 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:11:30.192447   84354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 19:11:30.198161   84354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:11:30.198155   84354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 19:11:30.198227   84354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 19:11:30.198368   84354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:11:30.198371   84354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:11:30.198427   84354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:11:30.198559   84354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:11:30.198591   84354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:11:30.203641   84354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:11:30.203855   84354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 19:11:30.203900   84354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:11:30.203925   84354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 19:11:30.203991   84354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:11:30.204222   84354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:11:30.205220   84354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:11:30.205226   84354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:11:30.660616   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 19:11:30.679086   84354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 19:11:30.679135   84354 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0108 19:11:30.679199   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0108 19:11:30.696195   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:11:30.696494   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 19:11:30.711934   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 19:11:30.713174   84354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 19:11:30.713209   84354 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:11:30.713264   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:11:30.729580   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:11:30.731478   84354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 19:11:30.731511   84354 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 19:11:30.731581   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0108 19:11:30.735052   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 19:11:30.750203   84354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 19:11:30.750230   84354 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:11:30.750296   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:11:30.751241   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 19:11:30.771749   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 19:11:30.795825   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:11:30.796857   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:11:30.817806   84354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 19:11:30.817838   84354 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:11:30.817903   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:11:30.836722   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 19:11:30.846004   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:11:30.863787   84354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 19:11:30.863813   84354 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:11:30.863875   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:11:30.882252   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 19:11:30.946244   84354 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 19:11:30.965065   84354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 19:11:30.965094   84354 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:11:30.965156   84354 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0108 19:11:30.982423   84354 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 19:11:30.982493   84354 cache_images.go:92] LoadImages completed in 790.053535ms
	W0108 19:11:30.982536   84354 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
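	The preload landed the images under their old k8s.gcr.io tags, so every registry.k8s.io name "needs transfer", and the fallback to per-image cache files fails because pause_3.1 and the rest were never materialized on the host. A host-side look at what that cache directory actually holds (path copied from the log):
	
		ls -l /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null \
		  || echo "no registry.k8s.io cache entries"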
	I0108 19:11:30.982606   84354 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:11:31.046651   84354 cni.go:84] Creating CNI manager for ""
	I0108 19:11:31.046669   84354 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:11:31.046686   84354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:11:31.046702   84354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-658000 NodeName:kubernetes-upgrade-658000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 19:11:31.046805   84354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-658000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-658000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
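	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and copied into place before init. On the node they can be sanity-checked with standard kubeadm subcommands, e.g.:
	
		sudo cat /var/tmp/minikube/kubeadm.yaml
		# List the control-plane images this config implies
		sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml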
	
	I0108 19:11:31.046856   84354 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-658000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
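	The kubelet unit and its 10-kubeadm.conf drop-in land in /lib/systemd/system and /etc/systemd/system/kubelet.service.d respectively. What systemd will actually execute is easiest to confirm with:
	
		systemctl cat kubelet    # merged unit; the ExecStart above should appear
		systemctl is-enabled kubelet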
	I0108 19:11:31.046918   84354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 19:11:31.055571   84354 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:11:31.055632   84354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:11:31.063912   84354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0108 19:11:31.079075   84354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:11:31.094440   84354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0108 19:11:31.110224   84354 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:11:31.114407   84354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:11:31.124832   84354 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000 for IP: 192.168.76.2
	I0108 19:11:31.124860   84354 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.125041   84354 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:11:31.125114   84354 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:11:31.125160   84354 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key
	I0108 19:11:31.125173   84354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt with IP's: []
	I0108 19:11:31.306265   84354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt ...
	I0108 19:11:31.306278   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt: {Name:mk542a015a5db18b2733d117c2f2c907fadd7615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.306643   84354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key ...
	I0108 19:11:31.306659   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key: {Name:mk918f7589f66874bc11ccdd85e7d278ec9af197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.306879   84354 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key.31bdca25
	I0108 19:11:31.306895   84354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 19:11:31.421638   84354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt.31bdca25 ...
	I0108 19:11:31.421655   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt.31bdca25: {Name:mk5160b0e99f4ae5a35edd359fd346e9e70ea209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.421993   84354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key.31bdca25 ...
	I0108 19:11:31.422007   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key.31bdca25: {Name:mk983b74547aa95dde1346659668e1670fa91849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.422219   84354 certs.go:337] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt
	I0108 19:11:31.422420   84354 certs.go:341] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key
	I0108 19:11:31.422589   84354 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key
	I0108 19:11:31.422604   84354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.crt with IP's: []
	I0108 19:11:31.651358   84354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.crt ...
	I0108 19:11:31.651392   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.crt: {Name:mkbf9aa7c53d2ee00fe2e713b0c3934e719a9b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.651701   84354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key ...
	I0108 19:11:31.651713   84354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key: {Name:mkf9cc3e33650c6c7edd27a24ca7d2f2a6321da3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:11:31.652136   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:11:31.652188   84354 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:11:31.652201   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:11:31.652234   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:11:31.652268   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:11:31.652298   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:11:31.652362   84354 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:11:31.652857   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:11:31.673995   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:11:31.695061   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:11:31.715898   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:11:31.736536   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:11:31.757367   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:11:31.777970   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:11:31.798341   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:11:31.818766   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:11:31.839796   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:11:31.860123   84354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:11:31.880544   84354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:11:31.896093   84354 ssh_runner.go:195] Run: openssl version
	I0108 19:11:31.902151   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:11:31.911440   84354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:11:31.915512   84354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:11:31.915568   84354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:11:31.921916   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:11:31.931085   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:11:31.940023   84354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:11:31.944128   84354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:11:31.944199   84354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:11:31.951085   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:11:31.960600   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:11:31.969889   84354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:11:31.973975   84354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:11:31.974021   84354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:11:31.980305   84354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
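	Each test -s / ln -fs pair above installs a CA into the OpenSSL trust directory twice: once under its own name and once under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is the name OpenSSL actually resolves at lookup time. The hash-link step, spelled out for the minikube CA using the same openssl invocation the log runs:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"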
	I0108 19:11:31.989154   84354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:11:31.993331   84354 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 19:11:31.993381   84354 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:11:31.993504   84354 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:11:32.011989   84354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:11:32.020734   84354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:11:32.028993   84354 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:11:32.029048   84354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:11:32.037400   84354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:11:32.037424   84354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:11:32.083459   84354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:11:32.083506   84354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:11:32.371078   84354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:11:32.371165   84354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:11:32.371244   84354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:11:32.551166   84354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:11:32.551935   84354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:11:32.557853   84354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:11:32.628255   84354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:11:32.673240   84354 out.go:204]   - Generating certificates and keys ...
	I0108 19:11:32.673338   84354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:11:32.673397   84354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:11:32.988510   84354 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 19:11:33.210673   84354 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 19:11:33.400065   84354 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 19:11:33.560020   84354 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 19:11:33.684309   84354 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 19:11:33.684586   84354 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 19:11:33.760309   84354 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 19:11:33.760440   84354 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 19:11:33.818255   84354 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 19:11:34.021774   84354 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 19:11:34.340227   84354 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 19:11:34.340289   84354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:11:34.399786   84354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:11:34.501883   84354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:11:34.604082   84354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:11:34.841427   84354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:11:34.841942   84354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:11:34.864078   84354 out.go:204]   - Booting up control plane ...
	I0108 19:11:34.864187   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:11:34.864305   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:11:34.864410   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:11:34.864515   84354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:11:34.864714   84354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:12:14.850595   84354 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:12:14.851072   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:12:14.851267   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:12:19.851608   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:12:19.851787   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:12:29.852310   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:12:29.852476   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:12:49.852826   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:12:49.852978   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:13:29.853168   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:13:29.853333   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:13:29.853349   84354 kubeadm.go:322] 
	I0108 19:13:29.853397   84354 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:13:29.853506   84354 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:13:29.853516   84354 kubeadm.go:322] 
	I0108 19:13:29.853543   84354 kubeadm.go:322] This error is likely caused by:
	I0108 19:13:29.853567   84354 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:13:29.853648   84354 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:13:29.853658   84354 kubeadm.go:322] 
	I0108 19:13:29.853740   84354 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:13:29.853772   84354 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:13:29.853797   84354 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:13:29.853800   84354 kubeadm.go:322] 
	I0108 19:13:29.853894   84354 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:13:29.853956   84354 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:13:29.854035   84354 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:13:29.854077   84354 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:13:29.854143   84354 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:13:29.854174   84354 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:13:29.855151   84354 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:13:29.855261   84354 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:13:29.855436   84354 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:13:29.855543   84354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:13:29.855669   84354 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:13:29.855767   84354 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
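	kubeadm then gives up after its wait-control-plane timeout while the kubelet never answers on :10248. The root cause is not in this log; one plausible hypothesis for a v1.16 kubelet inside a modern kicbase image is cgroup v2, which kubelet v1.16 predates. The commands kubeadm suggests, plus that cross-check (all standard), gather the evidence either way:
	
		systemctl status kubelet --no-pager
		journalctl -xeu kubelet --no-pager | tail -n 50
		stat -fc %T /sys/fs/cgroup    # cgroup2fs here would point at cgroup v2
		docker ps -a | grep kube | grep -v pause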
	W0108 19:13:29.855876   84354 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-658000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 19:13:29.855931   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 19:13:30.278007   84354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:13:30.290472   84354 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:13:30.290540   84354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:13:30.299847   84354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:13:30.299873   84354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:13:30.357009   84354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:13:30.357054   84354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:13:30.619098   84354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:13:30.619179   84354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:13:30.619263   84354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:13:30.805839   84354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:13:30.807000   84354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:13:30.820296   84354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:13:30.890449   84354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:13:30.914670   84354 out.go:204]   - Generating certificates and keys ...
	I0108 19:13:30.914728   84354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:13:30.914796   84354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:13:30.914856   84354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 19:13:30.914909   84354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 19:13:30.914964   84354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 19:13:30.915002   84354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 19:13:30.915062   84354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 19:13:30.915110   84354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 19:13:30.915164   84354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 19:13:30.915222   84354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 19:13:30.915250   84354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 19:13:30.915294   84354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:13:31.057999   84354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:13:31.150246   84354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:13:31.265449   84354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:13:31.539651   84354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:13:31.539917   84354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:13:31.561474   84354 out.go:204]   - Booting up control plane ...
	I0108 19:13:31.561742   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:13:31.561827   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:13:31.561888   84354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:13:31.561951   84354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:13:31.562086   84354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:14:11.547595   84354 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:14:11.548484   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:14:11.548710   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:14:16.550249   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:14:16.550456   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:14:26.550865   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:14:26.551071   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:14:46.551171   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:14:46.551336   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:15:26.551278   84354 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:15:26.551491   84354 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:15:26.551518   84354 kubeadm.go:322] 
	I0108 19:15:26.551557   84354 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:15:26.551589   84354 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:15:26.551594   84354 kubeadm.go:322] 
	I0108 19:15:26.551621   84354 kubeadm.go:322] This error is likely caused by:
	I0108 19:15:26.551651   84354 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:15:26.551756   84354 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:15:26.551763   84354 kubeadm.go:322] 
	I0108 19:15:26.551872   84354 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:15:26.551913   84354 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:15:26.551943   84354 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:15:26.551948   84354 kubeadm.go:322] 
	I0108 19:15:26.552028   84354 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:15:26.552105   84354 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:15:26.552185   84354 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:15:26.552239   84354 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:15:26.552301   84354 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:15:26.552329   84354 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:15:26.553178   84354 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:15:26.553242   84354 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:15:26.553364   84354 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:15:26.553505   84354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:15:26.553574   84354 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:15:26.553641   84354 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0108 19:15:26.553685   84354 kubeadm.go:406] StartCluster complete in 3m54.566349489s
	I0108 19:15:26.553770   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:15:26.573932   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.573951   84354 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:15:26.574027   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:15:26.597540   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.597563   84354 logs.go:286] No container was found matching "etcd"
	I0108 19:15:26.597634   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:15:26.617655   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.617669   84354 logs.go:286] No container was found matching "coredns"
	I0108 19:15:26.617751   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:15:26.643801   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.643816   84354 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:15:26.643895   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:15:26.685307   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.685334   84354 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:15:26.685477   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:15:26.712139   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.712153   84354 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:15:26.712224   84354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:15:26.729880   84354 logs.go:284] 0 containers: []
	W0108 19:15:26.729895   84354 logs.go:286] No container was found matching "kindnet"
	I0108 19:15:26.729910   84354 logs.go:123] Gathering logs for kubelet ...
	I0108 19:15:26.729917   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:15:26.766476   84354 logs.go:123] Gathering logs for dmesg ...
	I0108 19:15:26.766491   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:15:26.788499   84354 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:15:26.788565   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:15:26.928065   84354 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:15:26.928081   84354 logs.go:123] Gathering logs for Docker ...
	I0108 19:15:26.928092   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:15:26.958809   84354 logs.go:123] Gathering logs for container status ...
	I0108 19:15:26.958840   84354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 19:15:27.073078   84354 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 19:15:27.073118   84354 out.go:239] * 
	W0108 19:15:27.073174   84354 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:15:27.073198   84354 out.go:239] * 
	W0108 19:15:27.074334   84354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 19:15:27.163461   84354 out.go:177] 
	W0108 19:15:27.235785   84354 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:15:27.235850   84354 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 19:15:27.235874   84354 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 19:15:27.311597   84354 out.go:177] 

                                                
                                                
** /stderr **
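The start failure above repeats one actionable preflight warning on every attempt: Docker inside the node is using the "cgroupfs" cgroup driver while kubeadm expects "systemd", and the kubelet never comes up on port 10248. Minikube's own Suggestion line in the stderr above points at the kubelet-side fix; a minimal sketch of that retry, reusing the profile and flags from this test (the --extra-config flag is taken verbatim from the suggestion, not verified here):

	# Retry the failing start with the kubelet forced onto the systemd cgroup driver.
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

The Docker-side alternative is aligning the daemon instead, e.g. setting "exec-opts": ["native.cgroupdriver=systemd"] in the node's /etc/docker/daemon.json and restarting Docker, though the node image may reset that file on restart.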
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-658000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-658000: (1.628882922s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-658000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-658000 status --format={{.Host}}: exit status 7 (122.067859ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (26.609419689s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-658000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (455.119665ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-658000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-658000
	    minikube start -p kubernetes-upgrade-658000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6580002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-658000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
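The downgrade guard behaves as the test expects: exit status 106 with error class K8S_DOWNGRADE_UNSUPPORTED and three recovery paths printed. A hypothetical wrapper automating recovery path 1 (delete and recreate) could key off that exit code; this is a sketch only, and assumes code 106 stays stable across minikube releases:

	# Sketch: automate recovery path 1 from the suggestion above.
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	status=$?
	if [ "$status" -eq 106 ]; then   # K8S_DOWNGRADE_UNSUPPORTED, per the log above
	  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-658000
	  out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 \
	    --kubernetes-version=v1.16.0 --driver=docker
	fi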
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-658000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (31.769753098s)
version_upgrade_test.go:292: *** TestKubernetesUpgrade FAILED at 2024-01-08 19:16:28.065132 -0800 PST m=+2698.207892487
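The harness's automated post-mortem follows below. The equivalent manual evidence collection, combining the boxed `minikube logs` suggestion with the troubleshooting commands kubeadm printed earlier, would look like this (a sketch, assuming `minikube ssh` accepts a trailing command for this profile, as it does in current releases):

	# Collect minikube's own log bundle, then the kubelet journal and container list.
	out/minikube-darwin-amd64 -p kubernetes-upgrade-658000 logs --file=logs.txt
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-658000 -- sudo journalctl -xeu kubelet
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-658000 -- 'docker ps -a | grep kube | grep -v pause'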
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-658000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-658000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f",
	        "Created": "2024-01-09T03:11:18.444313642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:15:30.560851911Z",
	            "FinishedAt": "2024-01-09T03:15:27.879968283Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f/hostname",
	        "HostsPath": "/var/lib/docker/containers/29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f/hosts",
	        "LogPath": "/var/lib/docker/containers/29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f/29bc9888f459e22d5d4f9fb8ef73f7ea2bdeeb154092d7f46db247dec763990f-json.log",
	        "Name": "/kubernetes-upgrade-658000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-658000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-658000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1898161cf46aebd789bfe34f037c24e845556c0b297d773f3c4c8b647c04ed2c-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1898161cf46aebd789bfe34f037c24e845556c0b297d773f3c4c8b647c04ed2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1898161cf46aebd789bfe34f037c24e845556c0b297d773f3c4c8b647c04ed2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1898161cf46aebd789bfe34f037c24e845556c0b297d773f3c4c8b647c04ed2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-658000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-658000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-658000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-658000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-658000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae213498780c3e0ee325d8d3b07e2f7f63cb4d79f0285075c2d0d4cea127fd79",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65039"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65040"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65041"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65037"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ae213498780c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-658000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "29bc9888f459",
	                        "kubernetes-upgrade-658000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "facd48f11f7a18e25dcacb57c4e5f1f2afe5bc8450f55c6170253333924e4117",
	                    "EndpointID": "88de27893f9703dd399ffc4efdb5c1956a4357ec5f97d8df2edb4391019cc261",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
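The "Ports" map in the inspect output above shows how minikube publishes the kic container's services: each of 22, 2376, 5000, 8443 and 32443/tcp is bound to 127.0.0.1 on an ephemeral host port (the apiserver's 8443/tcp landed on 65037 here). Because HostConfig.PortBindings requests HostPort "0", Docker picks a free port at every container start, so the mapping has to be re-read after each restart. Below is a minimal Go sketch of reading one such mapping, built around the same "docker container inspect -f" Go template the harness itself runs further down in this log; the helper name and the hard-coded container/port are illustrative only, and the docker CLI is assumed to be on PATH:

	// hostPort shells out to the docker CLI and evaluates a Go template
	// against the container's NetworkSettings to recover the host port
	// that Docker assigned to the given container port.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("kubernetes-upgrade-658000", "8443/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + p) // 65037 in the output above
	}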
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-658000 -n kubernetes-upgrade-658000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-658000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-658000 logs -n 25: (2.533352437s)
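The "(dbg) Run" / "(dbg) Done" pairs emitted by helpers_test.go record every command the test shells out to and, for slower commands, the elapsed wall-clock time (2.533352437s above). A rough sketch of that pattern, not the actual minikube helper, using only the standard library and an assumed timeout:

	// run executes a command with a deadline, capturing combined
	// stdout/stderr and how long it took, in the spirit of the
	// "(dbg) Run ... / (dbg) Done ..." lines in this log.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func run(ctx context.Context, name string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		out, elapsed, err := run(ctx, "out/minikube-darwin-amd64", "-p", "kubernetes-upgrade-658000", "logs", "-n", "25")
		fmt.Printf("(dbg) Done: elapsed=%s err=%v\n%s", elapsed, err, out)
	}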
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-871000         | cert-expiration-871000    | jenkins | v1.32.0 | 08 Jan 24 19:12 PST | 08 Jan 24 19:12 PST |
	|         | --memory=2048                     |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h           |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-871000         | cert-expiration-871000    | jenkins | v1.32.0 | 08 Jan 24 19:12 PST | 08 Jan 24 19:12 PST |
	| delete  | -p stopped-upgrade-702000         | stopped-upgrade-702000    | jenkins | v1.32.0 | 08 Jan 24 19:13 PST | 08 Jan 24 19:13 PST |
	| start   | -p pause-622000 --memory=2048     | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:13 PST | 08 Jan 24 19:13 PST |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	| start   | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:13 PST | 08 Jan 24 19:14 PST |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| pause   | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:14 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| unpause | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:14 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| pause   | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:14 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| delete  | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:14 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| delete  | -p pause-622000                   | pause-622000              | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:14 PST |
	| start   | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:14 PST |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:14 PST | 08 Jan 24 19:15 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	| start   | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-658000      | kubernetes-upgrade-658000 | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	| start   | -p kubernetes-upgrade-658000      | kubernetes-upgrade-658000 | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-680000 sudo       | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	| start   | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-680000 sudo       | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-680000            | NoKubernetes-680000       | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:15 PST |
	| start   | -p auto-798000 --memory=3072      | auto-798000               | jenkins | v1.32.0 | 08 Jan 24 19:15 PST |                     |
	|         | --alsologtostderr --wait=true     |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-658000      | kubernetes-upgrade-658000 | jenkins | v1.32.0 | 08 Jan 24 19:15 PST |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-658000      | kubernetes-upgrade-658000 | jenkins | v1.32.0 | 08 Jan 24 19:15 PST | 08 Jan 24 19:16 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 19:15:56
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 19:15:56.349081   85911 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:15:56.349399   85911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:15:56.349406   85911 out.go:309] Setting ErrFile to fd 2...
	I0108 19:15:56.349410   85911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:15:56.349611   85911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:15:56.351269   85911 out.go:303] Setting JSON to false
	I0108 19:15:56.375749   85911 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":36928,"bootTime":1704733228,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:15:56.375874   85911 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:15:56.397271   85911 out.go:177] * [kubernetes-upgrade-658000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:15:56.460259   85911 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:15:56.439393   85911 notify.go:220] Checking for updates...
	I0108 19:15:56.503284   85911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:15:56.545476   85911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:15:56.587207   85911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:15:56.608290   85911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:15:56.631451   85911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:15:56.652603   85911 config.go:182] Loaded profile config "kubernetes-upgrade-658000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0108 19:15:56.653078   85911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:15:56.712319   85911 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:15:56.712494   85911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:15:56.840607   85911 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:73 SystemTime:2024-01-09 03:15:56.825721603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:15:56.862421   85911 out.go:177] * Using the docker driver based on existing profile
	I0108 19:15:56.883118   85911 start.go:298] selected driver: docker
	I0108 19:15:56.883143   85911 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:15:56.883209   85911 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:15:56.886443   85911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:15:57.006175   85911 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:73 SystemTime:2024-01-09 03:15:56.99642987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:15:57.006444   85911 cni.go:84] Creating CNI manager for ""
	I0108 19:15:57.006460   85911 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:15:57.006471   85911 start_flags.go:321] config:
	{Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:15:57.049141   85911 out.go:177] * Starting control plane node kubernetes-upgrade-658000 in cluster kubernetes-upgrade-658000
	I0108 19:15:57.070088   85911 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:15:57.092887   85911 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:15:57.135171   85911 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 19:15:57.135234   85911 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 19:15:57.135269   85911 cache.go:56] Caching tarball of preloaded images
	I0108 19:15:57.135264   85911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:15:57.135494   85911 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:15:57.135515   85911 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0108 19:15:57.135701   85911 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/config.json ...
	I0108 19:15:57.189992   85911 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:15:57.190020   85911 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:15:57.190046   85911 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:15:57.190094   85911 start.go:365] acquiring machines lock for kubernetes-upgrade-658000: {Name:mke48780d135ebd99d14a0e8964881eb6f37a1d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:15:57.190211   85911 start.go:369] acquired machines lock for "kubernetes-upgrade-658000" in 80.354µs
	I0108 19:15:57.190242   85911 start.go:96] Skipping create...Using existing machine configuration
	I0108 19:15:57.190263   85911 fix.go:54] fixHost starting: 
	I0108 19:15:57.190542   85911 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:15:57.248709   85911 fix.go:102] recreateIfNeeded on kubernetes-upgrade-658000: state=Running err=<nil>
	W0108 19:15:57.248747   85911 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 19:15:57.270318   85911 out.go:177] * Updating the running docker "kubernetes-upgrade-658000" container ...
	I0108 19:15:56.334178   85790 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 19:15:56.334281   85790 cli_runner.go:164] Run: docker exec -t auto-798000 dig +short host.docker.internal
	I0108 19:15:56.656598   85790 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:15:56.656702   85790 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:15:56.661233   85790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:15:56.672231   85790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-798000
	I0108 19:15:56.727627   85790 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 19:15:56.727756   85790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:15:56.750894   85790 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 19:15:56.750920   85790 docker.go:601] Images already preloaded, skipping extraction
	I0108 19:15:56.751009   85790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:15:56.779359   85790 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 19:15:56.779386   85790 cache_images.go:84] Images are preloaded, skipping loading
	I0108 19:15:56.779645   85790 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:15:56.853272   85790 cni.go:84] Creating CNI manager for ""
	I0108 19:15:56.853290   85790 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:15:56.853307   85790 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:15:56.853325   85790 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-798000 NodeName:auto-798000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 19:15:56.853441   85790 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-798000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 19:15:56.853506   85790 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-798000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-798000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 19:15:56.853564   85790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 19:15:56.862395   85790 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:15:56.862462   85790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:15:56.870850   85790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0108 19:15:56.886289   85790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:15:56.902400   85790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 19:15:56.919105   85790 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:15:56.923448   85790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:15:56.934970   85790 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000 for IP: 192.168.67.2
	I0108 19:15:56.935036   85790 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:56.935274   85790 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:15:56.935491   85790 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:15:56.935574   85790 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.key
	I0108 19:15:56.935594   85790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt with IP's: []
	I0108 19:15:57.113456   85790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt ...
	I0108 19:15:57.113469   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: {Name:mkdedd30ae7a54065e9171b47bf35fc5b89868af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.114722   85790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.key ...
	I0108 19:15:57.114732   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.key: {Name:mkfb6fbdeef13ff47f583e54103af9d8fcd1e3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.135346   85790 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key.c7fa3a9e
	I0108 19:15:57.135397   85790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 19:15:57.231452   85790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt.c7fa3a9e ...
	I0108 19:15:57.231470   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt.c7fa3a9e: {Name:mk7dc025d6f2106dafb9978f2df0c642a83eff62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.231838   85790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key.c7fa3a9e ...
	I0108 19:15:57.231848   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key.c7fa3a9e: {Name:mkc774fa5f69f86302bcb863fe614185f50fdb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.232098   85790 certs.go:337] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt
	I0108 19:15:57.232296   85790 certs.go:341] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key
	I0108 19:15:57.232487   85790 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.key
	I0108 19:15:57.232501   85790 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.crt with IP's: []
	I0108 19:15:57.358139   85790 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.crt ...
	I0108 19:15:57.358155   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.crt: {Name:mk0cff9e535339a92535c33baf7480d344e06cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.358499   85790 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.key ...
	I0108 19:15:57.358507   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.key: {Name:mk40573b97d7ff4410bfb2e99dc61a96a6dc32c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:15:57.359097   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:15:57.359149   85790 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:15:57.359159   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:15:57.359197   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:15:57.359235   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:15:57.359265   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:15:57.359332   85790 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:15:57.359890   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:15:57.382648   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:15:57.403567   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:15:57.429263   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:15:57.459675   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:15:57.480446   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:15:57.503635   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:15:57.536356   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:15:57.557865   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:15:57.577994   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:15:57.598776   85790 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:15:57.630044   85790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:15:57.652788   85790 ssh_runner.go:195] Run: openssl version
	I0108 19:15:57.659714   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:15:57.670423   85790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:15:57.675750   85790 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:15:57.675812   85790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:15:57.684007   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:15:57.693402   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:15:57.703511   85790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:15:57.709194   85790 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:15:57.709258   85790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:15:57.718206   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:15:57.729988   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:15:57.742002   85790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:15:57.748070   85790 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:15:57.748131   85790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:15:57.756809   85790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:15:57.769149   85790 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:15:57.774461   85790 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 19:15:57.774512   85790 kubeadm.go:404] StartCluster: {Name:auto-798000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-798000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:15:57.774650   85790 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:15:57.796016   85790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:15:57.804511   85790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:15:57.813163   85790 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:15:57.813226   85790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:15:57.821459   85790 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:15:57.821492   85790 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:15:57.907483   85790 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 19:15:57.907539   85790 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:15:58.112044   85790 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:15:58.112124   85790 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:15:58.112213   85790 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:15:58.439805   85790 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:15:58.482197   85790 out.go:204]   - Generating certificates and keys ...
	I0108 19:15:58.482267   85790 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:15:58.482342   85790 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:15:58.537808   85790 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 19:15:58.619954   85790 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 19:15:58.896543   85790 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 19:15:59.019413   85790 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 19:15:59.157010   85790 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 19:15:59.157167   85790 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-798000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 19:15:59.254718   85790 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 19:15:59.254881   85790 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-798000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 19:15:59.333290   85790 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 19:15:59.499309   85790 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 19:15:59.561818   85790 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 19:15:59.561863   85790 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:15:59.673983   85790 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:15:59.894560   85790 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:16:00.032734   85790 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:16:00.347059   85790 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:16:00.347395   85790 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:16:00.349494   85790 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:16:00.394903   85790 out.go:204]   - Booting up control plane ...
	I0108 19:16:00.394979   85790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:16:00.395042   85790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:16:00.395099   85790 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:16:00.395190   85790 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:16:00.395268   85790 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:16:00.395301   85790 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 19:16:00.433874   85790 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
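At this point kubeadm blocks for up to 4m0s while the kubelet launches the control plane from the static pod manifests it just wrote. If you were watching the same node by hand, a rough health probe might look like the sketch below — the endpoints are the standard kubelet/apiserver ones, not taken from this log, and 8443 is the apiserver port per the cluster config above:

    # Manifests kubeadm wrote for the kubelet to pick up:
    ls /etc/kubernetes/manifests/

    # Kubelet healthz (plain HTTP on localhost:10248):
    curl -s http://localhost:10248/healthz; echo

    # API server readiness; -k skips CA validation for a quick probe:
    curl -sk https://localhost:8443/readyz; echo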
	I0108 19:15:57.312096   85911 machine.go:88] provisioning docker machine ...
	I0108 19:15:57.312134   85911 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-658000"
	I0108 19:15:57.312236   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:57.367424   85911 main.go:141] libmachine: Using SSH client type: native
	I0108 19:15:57.367798   85911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 65038 <nil> <nil>}
	I0108 19:15:57.367810   85911 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-658000 && echo "kubernetes-upgrade-658000" | sudo tee /etc/hostname
	I0108 19:15:57.617239   85911 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-658000
	
	I0108 19:15:57.617347   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:57.682270   85911 main.go:141] libmachine: Using SSH client type: native
	I0108 19:15:57.682620   85911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 65038 <nil> <nil>}
	I0108 19:15:57.682637   85911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-658000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-658000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-658000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:15:57.834930   85911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
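The /etc/hosts script above is idempotent: `grep -xq` forces a whole-line match, so the rewrite only fires when no line already ends with the hostname, and an existing `127.0.1.1` entry is rewritten in place rather than duplicated. A tiny illustration of the `-x` behaviour, with a hypothetical hostname:

    # -x anchors the pattern to the whole line; \s is GNU grep's [[:space:]].
    grep -xq '.*\sdemo-host' /etc/hosts \
      && echo "demo-host already mapped" \
      || echo "no whole-line entry for demo-host"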
	I0108 19:15:57.834957   85911 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:15:57.834975   85911 ubuntu.go:177] setting up certificates
	I0108 19:15:57.834987   85911 provision.go:83] configureAuth start
	I0108 19:15:57.835074   85911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-658000
	I0108 19:15:57.889279   85911 provision.go:138] copyHostCerts
	I0108 19:15:57.889377   85911 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:15:57.889387   85911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:15:57.889510   85911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:15:57.889808   85911 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:15:57.889816   85911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:15:57.889912   85911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:15:57.890130   85911 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:15:57.890140   85911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:15:57.890231   85911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:15:57.890416   85911 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-658000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-658000]
	I0108 19:15:57.979779   85911 provision.go:172] copyRemoteCerts
	I0108 19:15:57.979864   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:15:57.979927   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:58.037316   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:15:58.134457   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 19:15:58.157636   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 19:15:58.181212   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:15:58.206344   85911 provision.go:86] duration metric: configureAuth took 371.350095ms
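configureAuth above regenerates the machine's server certificate with the SAN list shown in the `generating server cert` line and copies it to /etc/docker. Reproducing that shape with plain openssl might look like the following sketch — paths and validity period are illustrative, and this is not the code path minikube itself uses:

    # Issue a server cert signed by an existing CA, with IP and DNS SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.kubernetes-upgrade-658000"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf "subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:kubernetes-upgrade-658000")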
	I0108 19:15:58.206362   85911 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:15:58.206526   85911 config.go:182] Loaded profile config "kubernetes-upgrade-658000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0108 19:15:58.206669   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:58.267504   85911 main.go:141] libmachine: Using SSH client type: native
	I0108 19:15:58.267863   85911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 65038 <nil> <nil>}
	I0108 19:15:58.267877   85911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:15:58.405567   85911 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:15:58.405586   85911 ubuntu.go:71] root file system type: overlay
	I0108 19:15:58.405696   85911 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:15:58.405784   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:58.487837   85911 main.go:141] libmachine: Using SSH client type: native
	I0108 19:15:58.488124   85911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 65038 <nil> <nil>}
	I0108 19:15:58.488178   85911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:15:58.631856   85911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:15:58.631962   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:58.685118   85911 main.go:141] libmachine: Using SSH client type: native
	I0108 19:15:58.685434   85911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 65038 <nil> <nil>}
	I0108 19:15:58.685447   85911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:15:58.824773   85911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
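The `diff -u old new || { mv ...; daemon-reload; restart; }` one-liner above is a small idempotency trick: the daemon is only reloaded and restarted when the freshly rendered unit actually differs from the installed one, since diff exits non-zero on any difference and that triggers the replacement branch. The same pattern generalized (unit name illustrative; the original additionally runs `systemctl -f enable`):

    # Replace a unit and restart its service only when the content changed.
    sudo diff -u /lib/systemd/system/demo.service /tmp/demo.service.new || {
      sudo mv /tmp/demo.service.new /lib/systemd/system/demo.service
      sudo systemctl daemon-reload
      sudo systemctl restart demo.service
    }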
	I0108 19:15:58.824803   85911 machine.go:91] provisioned docker machine in 1.512730108s
	I0108 19:15:58.824817   85911 start.go:300] post-start starting for "kubernetes-upgrade-658000" (driver="docker")
	I0108 19:15:58.824836   85911 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:15:58.824927   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:15:58.824982   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:58.879334   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:15:58.975549   85911 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:15:58.979848   85911 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:15:58.979874   85911 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:15:58.979882   85911 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:15:58.979887   85911 info.go:137] Remote host: Ubuntu 22.04.3 LTS
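The `Couldn't set key VERSION_CODENAME ...` lines are benign: the provisioner parses /etc/os-release into a struct and warns about keys it has no field for, then reports the host summary (`Ubuntu 22.04.3 LTS`). Since os-release is plain `KEY=value` shell syntax, the same information can be read directly:

    # /etc/os-release is sourceable shell; pick out the fields you need.
    . /etc/os-release
    echo "$PRETTY_NAME"          # "Ubuntu 22.04.3 LTS" on this image
    echo "$VERSION_CODENAME"     # one of the keys the provisioner skips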
	I0108 19:15:58.979899   85911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:15:58.979982   85911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:15:58.980126   85911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:15:58.980278   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:15:58.989261   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:15:59.011125   85911 start.go:303] post-start completed in 186.298139ms
	I0108 19:15:59.011197   85911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:15:59.011249   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:59.068743   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:15:59.160297   85911 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:15:59.166254   85911 fix.go:56] fixHost completed within 1.97604828s
	I0108 19:15:59.166278   85911 start.go:83] releasing machines lock for "kubernetes-upgrade-658000", held for 1.976106673s
	I0108 19:15:59.166388   85911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-658000
	I0108 19:15:59.219340   85911 ssh_runner.go:195] Run: cat /version.json
	I0108 19:15:59.219364   85911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:15:59.219413   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:59.219435   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:15:59.282631   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:15:59.282644   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:15:59.478902   85911 ssh_runner.go:195] Run: systemctl --version
	I0108 19:15:59.483745   85911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 19:15:59.488904   85911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 19:15:59.488967   85911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 19:15:59.497251   85911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 19:15:59.505766   85911 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0108 19:15:59.505790   85911 start.go:475] detecting cgroup driver to use...
	I0108 19:15:59.505804   85911 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:15:59.505921   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:15:59.520675   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 19:15:59.530674   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:15:59.540174   85911 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:15:59.540246   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:15:59.549952   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:15:59.559630   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:15:59.569389   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:15:59.579435   85911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:15:59.588848   85911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 19:15:59.598318   85911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:15:59.606373   85911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:15:59.614597   85911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:15:59.688144   85911 ssh_runner.go:195] Run: sudo systemctl restart containerd
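The sed runs above rewrite /etc/containerd/config.toml so containerd matches the cgroupfs driver detected on the host (SystemdCgroup = false, runc v2 runtime, sandbox image pinned to pause:3.9), and /etc/crictl.yaml points crictl at containerd's socket before the restart. A quick way to confirm the result after the restart (a sketch; TOML key per containerd 1.x defaults):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock

    # Confirm the cgroup driver containerd will actually use:
    grep -n 'SystemdCgroup' /etc/containerd/config.toml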
	I0108 19:16:05.935857   85790 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502143 seconds
	I0108 19:16:05.936003   85790 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 19:16:05.946139   85790 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 19:16:06.463172   85790 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 19:16:06.463315   85790 kubeadm.go:322] [mark-control-plane] Marking the node auto-798000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 19:16:06.971478   85790 kubeadm.go:322] [bootstrap-token] Using token: xqjzt3.tgpzdurgb5n44kr9
	I0108 19:16:07.009339   85790 out.go:204]   - Configuring RBAC rules ...
	I0108 19:16:07.009498   85790 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 19:16:07.035161   85790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 19:16:07.041690   85790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 19:16:07.044373   85790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 19:16:07.046808   85790 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 19:16:07.049209   85790 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 19:16:07.057835   85790 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 19:16:07.193545   85790 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 19:16:07.440270   85790 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 19:16:07.441304   85790 kubeadm.go:322] 
	I0108 19:16:07.441396   85790 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 19:16:07.441415   85790 kubeadm.go:322] 
	I0108 19:16:07.441508   85790 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 19:16:07.441516   85790 kubeadm.go:322] 
	I0108 19:16:07.441536   85790 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 19:16:07.441626   85790 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 19:16:07.441697   85790 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 19:16:07.441737   85790 kubeadm.go:322] 
	I0108 19:16:07.441786   85790 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 19:16:07.441794   85790 kubeadm.go:322] 
	I0108 19:16:07.441842   85790 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 19:16:07.441849   85790 kubeadm.go:322] 
	I0108 19:16:07.441894   85790 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 19:16:07.442022   85790 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 19:16:07.442144   85790 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 19:16:07.442177   85790 kubeadm.go:322] 
	I0108 19:16:07.442309   85790 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 19:16:07.442402   85790 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 19:16:07.442417   85790 kubeadm.go:322] 
	I0108 19:16:07.442498   85790 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xqjzt3.tgpzdurgb5n44kr9 \
	I0108 19:16:07.442586   85790 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5beb8effef1659eef4619b10d6d971de9fc8e0e62e5d4843e2354495dc53f719 \
	I0108 19:16:07.442612   85790 kubeadm.go:322] 	--control-plane 
	I0108 19:16:07.442628   85790 kubeadm.go:322] 
	I0108 19:16:07.442744   85790 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 19:16:07.442756   85790 kubeadm.go:322] 
	I0108 19:16:07.442877   85790 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xqjzt3.tgpzdurgb5n44kr9 \
	I0108 19:16:07.443053   85790 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5beb8effef1659eef4619b10d6d971de9fc8e0e62e5d4843e2354495dc53f719 
	I0108 19:16:07.445430   85790 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 19:16:07.445499   85790 kubeadm.go:322] 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0108 19:16:07.445693   85790 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:16:07.445737   85790 cni.go:84] Creating CNI manager for ""
	I0108 19:16:07.445762   85790 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:16:07.506643   85790 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 19:16:09.857809   85911 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.169906135s)
	I0108 19:16:09.857832   85911 start.go:475] detecting cgroup driver to use...
	I0108 19:16:09.857845   85911 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:16:09.857908   85911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:16:09.878992   85911 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:16:09.879056   85911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:16:09.890356   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:16:09.907794   85911 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:16:09.912978   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:16:09.921985   85911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:16:09.938566   85911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:16:10.029547   85911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:16:10.129486   85911 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:16:10.129565   85911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:16:10.145638   85911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:16:10.230211   85911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:16:10.498833   85911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:16:10.558059   85911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 19:16:10.613798   85911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:16:10.664295   85911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:16:10.726383   85911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 19:16:10.748501   85911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:16:10.813121   85911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 19:16:10.903619   85911 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 19:16:10.903717   85911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 19:16:10.908415   85911 start.go:543] Will wait 60s for crictl version
	I0108 19:16:10.908479   85911 ssh_runner.go:195] Run: which crictl
	I0108 19:16:10.912683   85911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 19:16:10.957505   85911 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
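The `crictl version` run above confirms the CRI endpoint is answering; RuntimeName is docker because the socket being probed belongs to the cri-dockerd shim in front of the Docker 24.0.7 engine. The endpoint can also be passed explicitly rather than through /etc/crictl.yaml:

    # Equivalent probe with the endpoint spelled out:
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version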
	I0108 19:16:10.957590   85911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:16:10.982314   85911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:16:07.543787   85790 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 19:16:07.553948   85790 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 19:16:07.570028   85790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 19:16:07.570121   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:07.570130   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c4ef52eca86898c65de92fcd28450f715088c13b minikube.k8s.io/name=auto-798000 minikube.k8s.io/updated_at=2024_01_08T19_16_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:07.685210   85790 ops.go:34] apiserver oom_adj: -16
	I0108 19:16:07.685285   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:08.185923   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:08.687423   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:09.185353   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:09.687328   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:10.185546   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:10.685304   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:11.029448   85911 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0108 19:16:11.029602   85911 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-658000 dig +short host.docker.internal
	I0108 19:16:11.144646   85911 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:16:11.144756   85911 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:16:11.149496   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:16:11.202056   85911 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 19:16:11.202140   85911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:16:11.222333   85911 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:16:11.222357   85911 docker.go:601] Images already preloaded, skipping extraction
	I0108 19:16:11.222435   85911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:16:11.242332   85911 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:16:11.242365   85911 cache_images.go:84] Images are preloaded, skipping loading
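The preload check above lists the images already present in the container's Docker daemon and skips tarball extraction when the expected v1.29.0-rc.2 images are all there (the lingering k8s.gcr.io v1.16.0 images are leftovers from the version this profile is upgrading from). A hand-rolled version of the same check might be, with one image standing in for the required set:

    # Skip a pull/extract step when a required image is already present.
    required=registry.k8s.io/kube-apiserver:v1.29.0-rc.2
    if docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$required"; then
      echo "preloaded: $required"
    fi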
	I0108 19:16:11.242495   85911 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:16:11.292433   85911 cni.go:84] Creating CNI manager for ""
	I0108 19:16:11.292449   85911 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:16:11.292466   85911 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:16:11.292503   85911 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-658000 NodeName:kubernetes-upgrade-658000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 19:16:11.292623   85911 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-658000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
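The generated kubeadm config above stitches an InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document YAML (the `%!"(MISSING)` fragments are logging-format glitches from the `%` signs; the values written to disk are plain `"0%"`). On recent kubeadm releases the rendered file can be sanity-checked before use — a sketch, assuming the validate subcommand (kubeadm v1.26+) is available:

    # Validate the rendered config without touching the cluster.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new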
	
	I0108 19:16:11.292699   85911 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-658000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 19:16:11.292760   85911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 19:16:11.301223   85911 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:16:11.301301   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:16:11.309669   85911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0108 19:16:11.324992   85911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 19:16:11.341277   85911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0108 19:16:11.185216   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:11.685198   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:12.185324   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:12.685711   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:13.186366   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:13.686376   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:14.185528   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:14.685288   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:15.185260   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:15.686019   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:11.358046   85911 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:16:11.374899   85911 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000 for IP: 192.168.76.2
	I0108 19:16:11.374922   85911 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:16:11.375069   85911 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:16:11.375129   85911 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:16:11.375228   85911 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key
	I0108 19:16:11.375350   85911 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key.31bdca25
	I0108 19:16:11.375422   85911 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key
	I0108 19:16:11.375617   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:16:11.375652   85911 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:16:11.375661   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:16:11.375694   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:16:11.375724   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:16:11.375758   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:16:11.375825   85911 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:16:11.376453   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:16:11.396868   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:16:11.417101   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:16:11.437709   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:16:11.458591   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:16:11.479109   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:16:11.500003   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:16:11.520504   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:16:11.540779   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:16:11.561635   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:16:11.582075   85911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:16:11.602059   85911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:16:11.617531   85911 ssh_runner.go:195] Run: openssl version
	I0108 19:16:11.622856   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:16:11.631846   85911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:16:11.635780   85911 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:16:11.635830   85911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:16:11.642176   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:16:11.650538   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:16:11.659548   85911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:16:11.663658   85911 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:16:11.663710   85911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:16:11.670224   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:16:11.678482   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:16:11.687786   85911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:16:11.693297   85911 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:16:11.693373   85911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:16:11.700410   85911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:16:11.709449   85911 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:16:11.713851   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 19:16:11.720580   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 19:16:11.727284   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 19:16:11.733847   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 19:16:11.741354   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 19:16:11.748330   85911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
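Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds: exit status 0 means it stays valid for at least another day, so a clean exit across all six certs is what lets the restart path reuse them instead of regenerating. Standalone:

    # Exit 0 if the cert is still valid 24h from now, 1 if it will expire.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert good for at least another day"
    else
      echo "cert expires within 24h; regenerate"
    fi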
	I0108 19:16:11.755346   85911 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:16:11.755455   85911 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:16:11.775277   85911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:16:11.783962   85911 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 19:16:11.783978   85911 kubeadm.go:636] restartCluster start
	I0108 19:16:11.784033   85911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 19:16:11.792077   85911 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:11.792157   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:16:11.844151   85911 kubeconfig.go:92] found "kubernetes-upgrade-658000" server: "https://127.0.0.1:65037"
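
	Note: the `docker container inspect -f` call above uses a Go text/template to extract the host port mapped to the container's 8443/tcp endpoint; that is how minikube discovers the apiserver address it reports next (https://127.0.0.1:65037). A self-contained sketch of the template mechanics, run against a toy stand-in for Docker's real NetworkSettings structure:

	// Sketch: resolving the host port with the same template as the log line.
	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Toy data shaped like Docker's NetworkSettings.Ports mapping.
		data := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string][]map[string]string{
					"8443/tcp": {{"HostIp": "127.0.0.1", "HostPort": "65037"}},
				},
			},
		}
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`))
		// index twice (map key, then slice element), then .HostPort: prints 65037.
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
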
	I0108 19:16:11.844699   85911 kapi.go:59] client config for kubernetes-upgrade-658000: &rest.Config{Host:"https://127.0.0.1:65037", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key", CAFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27e8720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 19:16:11.845408   85911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 19:16:11.853974   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:11.854036   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:11.863201   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:12.354298   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:12.354412   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:12.365409   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:12.853994   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:12.854065   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:12.863845   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:13.354372   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:13.354451   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:13.364207   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:13.854726   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:13.854817   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:13.866032   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:14.353979   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:14.354099   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:14.365149   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:14.853928   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:14.854006   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:16:14.908862   85911 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:16:15.353983   85911 api_server.go:166] Checking apiserver status ...
	I0108 19:16:15.354088   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:16:15.410220   85911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4484/cgroup
	W0108 19:16:15.421259   85911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4484/cgroup: Process exited with status 1
	stdout:
	
	stderr:
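
	Note: the freezer lookup above fails because on a cgroup v2 (unified hierarchy) host, /proc/<pid>/cgroup contains a single "0::<path>" entry and no named freezer controller line; minikube logs the warning and continues. A small Go sketch of that probe, assuming the cgroup v1 line format N:freezer:<path>:

	// Sketch: scan /proc/<pid>/cgroup for a v1 freezer controller entry.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func freezerCgroup(pid string) (string, bool) {
		f, err := os.Open("/proc/" + pid + "/cgroup")
		if err != nil {
			return "", false
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// cgroup v1 lines look like "7:freezer:/kubepods/...".
			parts := strings.SplitN(sc.Text(), ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				return parts[2], true
			}
		}
		return "", false
	}

	func main() {
		if path, ok := freezerCgroup("self"); ok {
			fmt.Println("freezer cgroup:", path)
		} else {
			fmt.Println("no freezer cgroup (likely cgroup v2)")
		}
	}
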
	I0108 19:16:15.421323   85911 ssh_runner.go:195] Run: ls
	I0108 19:16:15.425966   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:16.186352   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:16.686630   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:17.185185   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:17.685222   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:18.185180   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:18.685098   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:19.185410   85790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 19:16:19.448854   85790 kubeadm.go:1088] duration metric: took 11.879098793s to wait for elevateKubeSystemPrivileges.
	I0108 19:16:19.448874   85790 kubeadm.go:406] StartCluster complete in 21.674925128s
	I0108 19:16:19.448887   85790 settings.go:142] acquiring lock: {Name:mk7fdf0cdaaa885ecc8ed27d1c431ecf7550f639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:16:19.448964   85790 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:16:19.449715   85790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:16:19.449995   85790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 19:16:19.450025   85790 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 19:16:19.450132   85790 addons.go:69] Setting default-storageclass=true in profile "auto-798000"
	I0108 19:16:19.450141   85790 addons.go:69] Setting storage-provisioner=true in profile "auto-798000"
	I0108 19:16:19.450161   85790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-798000"
	I0108 19:16:19.450162   85790 addons.go:237] Setting addon storage-provisioner=true in "auto-798000"
	I0108 19:16:19.450191   85790 config.go:182] Loaded profile config "auto-798000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:16:19.450211   85790 host.go:66] Checking if "auto-798000" exists ...
	I0108 19:16:19.450425   85790 cli_runner.go:164] Run: docker container inspect auto-798000 --format={{.State.Status}}
	I0108 19:16:19.451454   85790 cli_runner.go:164] Run: docker container inspect auto-798000 --format={{.State.Status}}
	I0108 19:16:19.539363   85790 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:16:19.518174   85790 addons.go:237] Setting addon default-storageclass=true in "auto-798000"
	I0108 19:16:19.539407   85790 host.go:66] Checking if "auto-798000" exists ...
	I0108 19:16:19.559416   85790 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:16:19.559431   85790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 19:16:19.559529   85790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-798000
	I0108 19:16:19.559754   85790 cli_runner.go:164] Run: docker container inspect auto-798000 --format={{.State.Status}}
	I0108 19:16:19.633323   85790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 19:16:19.649496   85790 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 19:16:19.649516   85790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 19:16:19.649618   85790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-798000
	I0108 19:16:19.649734   85790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65097 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/auto-798000/id_rsa Username:docker}
	I0108 19:16:19.721385   85790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65097 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/auto-798000/id_rsa Username:docker}
	I0108 19:16:19.841511   85790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:16:19.938764   85790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 19:16:20.029248   85790 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-798000" context rescaled to 1 replicas
	I0108 19:16:20.029285   85790 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 19:16:20.053162   85790 out.go:177] * Verifying Kubernetes components...
	I0108 19:16:20.095294   85790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:16:20.841832   85790 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.208480881s)
	I0108 19:16:20.841855   85790 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
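
	Note: the sed pipeline completed above edits CoreDNS's Corefile inside the coredns ConfigMap: it splices a hosts block ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`, then pipes the result back through `kubectl replace`. The injected fragment, reconstructed from the sed expression itself, is:

	        hosts {
	           192.168.65.254 host.minikube.internal
	           fallthrough
	        }

	Because of `fallthrough`, only host.minikube.internal is answered from this block; every other name still flows to the upstream resolver, so the host record is purely additive.
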
	I0108 19:16:21.241591   85790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400076277s)
	I0108 19:16:21.241612   85790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.302860241s)
	I0108 19:16:21.241708   85790 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.146426004s)
	I0108 19:16:21.241865   85790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-798000
	I0108 19:16:21.274714   85790 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 19:16:17.934913   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:16:17.934947   85911 retry.go:31] will retry after 241.413011ms: https://127.0.0.1:65037/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:16:18.176713   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:18.181704   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:18.181723   85911 retry.go:31] will retry after 244.927133ms: https://127.0.0.1:65037/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:18.426737   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:18.432854   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:18.432875   85911 retry.go:31] will retry after 414.599577ms: https://127.0.0.1:65037/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:18.847543   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:18.852521   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:18.852540   85911 retry.go:31] will retry after 593.661117ms: https://127.0.0.1:65037/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:19.446381   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:19.451765   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 200:
	ok
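
	Note: the sequence above is minikube's standard readiness loop: poll /healthz, treat 403 (anonymous access denied before RBAC bootstrap) and 500 (poststarthook/rbac/bootstrap-roles still pending) as retryable, and stop once the endpoint returns 200 "ok". A simplified Go sketch of the pattern; the URL matches the log, while the TLS setup and backoff values are illustrative:

	// Sketch: poll an apiserver /healthz endpoint until it reports healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Simplification: a real client would present the cluster
				// CA instead of skipping certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
			}
			time.Sleep(backoff)
			if backoff < 2*time.Second {
				backoff *= 2 // roughly mirrors the growing retry intervals above
			}
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:65037/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
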
	I0108 19:16:19.467634   85911 system_pods.go:86] 5 kube-system pods found
	I0108 19:16:19.467666   85911 system_pods.go:89] "etcd-kubernetes-upgrade-658000" [9ac6d7e1-d21f-46b7-8465-eb5d1d6f0d01] Pending
	I0108 19:16:19.467685   85911 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-658000" [6fa536d8-4229-4bea-86d8-e82654bbc355] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:16:19.467698   85911 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-658000" [6f673d38-4ce2-45cc-9aea-6fa2c72d0458] Pending
	I0108 19:16:19.467732   85911 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-658000" [264fa34c-e322-4c02-ae87-c47f3438cc7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:16:19.467763   85911 system_pods.go:89] "storage-provisioner" [ee7998cb-5cdc-4d28-94b8-3ee55877fdc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 19:16:19.467782   85911 kubeadm.go:620] needs reconfigure: missing components: kube-dns, etcd, kube-controller-manager, kube-proxy
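
	Note: the reconfigure decision above reduces to a set difference: the expected control-plane components minus the pods that are actually Running and Ready. A toy Go sketch reproducing the log's verdict; the component names mirror the log, but the selection logic is illustrative rather than minikube's exact code:

	// Sketch: decide which required kube-system components are missing.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		required := []string{"kube-dns", "etcd", "kube-apiserver",
			"kube-controller-manager", "kube-proxy", "kube-scheduler"}
		// Components observed healthy in the log (Pending pods don't count).
		running := map[string]bool{"kube-apiserver": true, "kube-scheduler": true}
		var missing []string
		for _, c := range required {
			if !running[c] {
				missing = append(missing, c)
			}
		}
		if len(missing) > 0 {
			fmt.Println("needs reconfigure: missing components:",
				strings.Join(missing, ", "))
		}
	}
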
	I0108 19:16:19.467792   85911 kubeadm.go:1135] stopping kube-system containers ...
	I0108 19:16:19.467901   85911 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:16:19.496145   85911 docker.go:469] Stopping containers: [35e960afd88a 1f93f714fa9f ce18049da2fc 53ddcdf14302 4176f67f0ace 4f53ff005d90 d03d91c37f8e b95b8839a2ec 97a8764f329d ef668b5b3f00 ff94a6655272 6936284eabc4 1b32fe788dbf c009d32b0398 8093eb47fbf1 bb3026748142]
	I0108 19:16:19.496235   85911 ssh_runner.go:195] Run: docker stop 35e960afd88a 1f93f714fa9f ce18049da2fc 53ddcdf14302 4176f67f0ace 4f53ff005d90 d03d91c37f8e b95b8839a2ec 97a8764f329d ef668b5b3f00 ff94a6655272 6936284eabc4 1b32fe788dbf c009d32b0398 8093eb47fbf1 bb3026748142
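
	Note: kubelet-managed containers under cri-dockerd are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so the name=k8s_.*_(kube-system)_ filter above selects every kube-system container in one pass before they are stopped together. A Go sketch of that list-then-stop step, shelling out to the plain docker CLI (illustrative, not minikube's internal runner):

	// Sketch: stop all kube-system containers by their kubelet naming convention.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_.*_(kube-system)_",
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		// One invocation stops every matched container, as in the log.
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			fmt.Println("docker stop failed:", err)
		}
	}
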
	I0108 19:16:20.018488   85911 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 19:16:20.054911   85911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:16:20.064562   85911 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jan  9 03:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Jan  9 03:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Jan  9 03:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Jan  9 03:13 /etc/kubernetes/scheduler.conf
	
	I0108 19:16:20.064635   85911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 19:16:20.074754   85911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 19:16:20.083871   85911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 19:16:20.092488   85911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 19:16:20.101851   85911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:16:20.112169   85911 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 19:16:20.112184   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:16:20.223869   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:16:21.309917   85911 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.086052741s)
	I0108 19:16:21.309931   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:16:21.295501   85790 addons.go:508] enable addons completed in 1.84553687s: enabled=[storage-provisioner default-storageclass]
	I0108 19:16:21.303436   85790 node_ready.go:35] waiting up to 15m0s for node "auto-798000" to be "Ready" ...
	I0108 19:16:21.328404   85790 node_ready.go:49] node "auto-798000" has status "Ready":"True"
	I0108 19:16:21.328439   85790 node_ready.go:38] duration metric: took 24.980554ms waiting for node "auto-798000" to be "Ready" ...
	I0108 19:16:21.328468   85790 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 19:16:21.338623   85790 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-5ntvk" in "kube-system" namespace to be "Ready" ...
	I0108 19:16:21.842831   85790 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5ntvk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5ntvk" not found
	I0108 19:16:21.842857   85790 pod_ready.go:81] duration metric: took 504.225511ms waiting for pod "coredns-5dd5756b68-5ntvk" in "kube-system" namespace to be "Ready" ...
	E0108 19:16:21.842868   85790 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5ntvk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5ntvk" not found
	I0108 19:16:21.842875   85790 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-w7jls" in "kube-system" namespace to be "Ready" ...
	I0108 19:16:23.851368   85790 pod_ready.go:102] pod "coredns-5dd5756b68-w7jls" in "kube-system" namespace has status "Ready":"False"
	I0108 19:16:21.467093   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:16:21.515158   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
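
	Note: rather than running a full `kubeadm init`, the restart path replays individual init phases against the existing data directory, in the order seen above (certs, kubeconfig, kubelet-start, control-plane, etcd) and completed by `addon all` further down. A compressed Go sketch of that sequence; the binary path and version mirror the log, while the loop itself is illustrative:

	// Sketch: replay kubeadm init phases one at a time during a cluster restart.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
			"addon all",
		}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			// Each phase is idempotent against the existing cluster state.
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
				return
			}
		}
	}
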
	I0108 19:16:21.609158   85911 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:16:21.609275   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:16:22.109345   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:16:22.609364   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:16:22.620835   85911 api_server.go:72] duration metric: took 1.011702909s to wait for apiserver process to appear ...
	I0108 19:16:22.620849   85911 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:16:22.620864   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:25.201305   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:16:25.201323   85911 api_server.go:103] status: https://127.0.0.1:65037/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:16:25.201334   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:25.211900   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:16:25.211950   85911 api_server.go:103] status: https://127.0.0.1:65037/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:16:25.621625   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:25.627944   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:16:25.627962   85911 api_server.go:103] status: https://127.0.0.1:65037/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:26.122308   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:26.127812   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:16:26.127829   85911 api_server.go:103] status: https://127.0.0.1:65037/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:16:26.622890   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:26.630149   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 200:
	ok
	I0108 19:16:26.636704   85911 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 19:16:26.636718   85911 api_server.go:131] duration metric: took 4.015966211s to wait for apiserver health ...
	I0108 19:16:26.636724   85911 cni.go:84] Creating CNI manager for ""
	I0108 19:16:26.636732   85911 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:16:26.658173   85911 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 19:16:26.679968   85911 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 19:16:26.689693   85911 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 19:16:26.704946   85911 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:16:26.710482   85911 system_pods.go:59] 5 kube-system pods found
	I0108 19:16:26.710497   85911 system_pods.go:61] "etcd-kubernetes-upgrade-658000" [9ac6d7e1-d21f-46b7-8465-eb5d1d6f0d01] Pending
	I0108 19:16:26.710509   85911 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-658000" [6fa536d8-4229-4bea-86d8-e82654bbc355] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:16:26.710513   85911 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-658000" [6f673d38-4ce2-45cc-9aea-6fa2c72d0458] Pending
	I0108 19:16:26.710519   85911 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-658000" [264fa34c-e322-4c02-ae87-c47f3438cc7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:16:26.710531   85911 system_pods.go:61] "storage-provisioner" [ee7998cb-5cdc-4d28-94b8-3ee55877fdc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 19:16:26.710535   85911 system_pods.go:74] duration metric: took 5.579292ms to wait for pod list to return data ...
	I0108 19:16:26.710544   85911 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:16:26.713592   85911 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:16:26.713607   85911 node_conditions.go:123] node cpu capacity is 12
	I0108 19:16:26.713618   85911 node_conditions.go:105] duration metric: took 3.070905ms to run NodePressure ...
	I0108 19:16:26.713629   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:16:26.966056   85911 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 19:16:26.973594   85911 ops.go:34] apiserver oom_adj: -16
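
	Note: oom_adj -16 confirms the kube-apiserver process is shielded from the kernel's OOM killer. A small Go sketch of the probe above (pgrep plus a procfs read); oom_adj is the legacy interface, and modern kernels also expose oom_score_adj:

	// Sketch: read the apiserver's OOM adjustment from procfs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("apiserver not running:", err)
			return
		}
		fields := strings.Fields(string(out))
		if len(fields) == 0 {
			return
		}
		// Take the first matching pid; negative values mean "avoid killing".
		adj, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
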
	I0108 19:16:26.973608   85911 kubeadm.go:640] restartCluster took 15.190013026s
	I0108 19:16:26.973616   85911 kubeadm.go:406] StartCluster complete in 15.218673725s
	I0108 19:16:26.973627   85911 settings.go:142] acquiring lock: {Name:mk7fdf0cdaaa885ecc8ed27d1c431ecf7550f639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:16:26.973716   85911 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:16:26.974402   85911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:16:26.974683   85911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 19:16:26.974717   85911 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 19:16:26.974768   85911 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-658000"
	I0108 19:16:26.974781   85911 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-658000"
	I0108 19:16:26.974784   85911 addons.go:237] Setting addon storage-provisioner=true in "kubernetes-upgrade-658000"
	W0108 19:16:26.974792   85911 addons.go:246] addon storage-provisioner should already be in state true
	I0108 19:16:26.974797   85911 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-658000"
	I0108 19:16:26.974832   85911 config.go:182] Loaded profile config "kubernetes-upgrade-658000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0108 19:16:26.974849   85911 host.go:66] Checking if "kubernetes-upgrade-658000" exists ...
	I0108 19:16:26.975069   85911 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:16:26.975175   85911 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:16:26.976213   85911 kapi.go:59] client config for kubernetes-upgrade-658000: &rest.Config{Host:"https://127.0.0.1:65037", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key", CAFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27e8720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 19:16:26.982789   85911 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-658000" context rescaled to 1 replicas
	I0108 19:16:26.982821   85911 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 19:16:27.020047   85911 out.go:177] * Verifying Kubernetes components...
	I0108 19:16:27.056490   85911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:16:27.087592   85911 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:16:27.067356   85911 kapi.go:59] client config for kubernetes-upgrade-658000: &rest.Config{Host:"https://127.0.0.1:65037", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubernetes-upgrade-658000/client.key", CAFile:"/Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27e8720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 19:16:27.067573   85911 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 19:16:27.072676   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:16:27.087843   85911 addons.go:237] Setting addon default-storageclass=true in "kubernetes-upgrade-658000"
	W0108 19:16:27.124622   85911 addons.go:246] addon default-storageclass should already be in state true
	I0108 19:16:27.124643   85911 host.go:66] Checking if "kubernetes-upgrade-658000" exists ...
	I0108 19:16:27.124653   85911 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:16:27.124665   85911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 19:16:27.124752   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:16:27.125923   85911 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-658000 --format={{.State.Status}}
	I0108 19:16:27.190806   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:16:27.190837   85911 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:16:27.190915   85911 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 19:16:27.190938   85911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 19:16:27.191002   85911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:16:27.191054   85911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-658000
	I0108 19:16:27.207069   85911 api_server.go:72] duration metric: took 224.218825ms to wait for apiserver process to appear ...
	I0108 19:16:27.207103   85911 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:16:27.207142   85911 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65037/healthz ...
	I0108 19:16:27.215175   85911 api_server.go:279] https://127.0.0.1:65037/healthz returned 200:
	ok
	I0108 19:16:27.217605   85911 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 19:16:27.217623   85911 api_server.go:131] duration metric: took 10.513908ms to wait for apiserver health ...
	I0108 19:16:27.217629   85911 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:16:27.225557   85911 system_pods.go:59] 5 kube-system pods found
	I0108 19:16:27.225579   85911 system_pods.go:61] "etcd-kubernetes-upgrade-658000" [9ac6d7e1-d21f-46b7-8465-eb5d1d6f0d01] Pending
	I0108 19:16:27.225592   85911 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-658000" [6fa536d8-4229-4bea-86d8-e82654bbc355] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:16:27.225602   85911 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-658000" [6f673d38-4ce2-45cc-9aea-6fa2c72d0458] Pending
	I0108 19:16:27.225638   85911 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-658000" [264fa34c-e322-4c02-ae87-c47f3438cc7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:16:27.225662   85911 system_pods.go:61] "storage-provisioner" [ee7998cb-5cdc-4d28-94b8-3ee55877fdc9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 19:16:27.225676   85911 system_pods.go:74] duration metric: took 8.040588ms to wait for pod list to return data ...
	I0108 19:16:27.225688   85911 kubeadm.go:581] duration metric: took 242.844448ms to wait for : map[apiserver:true system_pods:true] ...
	I0108 19:16:27.225706   85911 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:16:27.230084   85911 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:16:27.230107   85911 node_conditions.go:123] node cpu capacity is 12
	I0108 19:16:27.230184   85911 node_conditions.go:105] duration metric: took 4.464366ms to run NodePressure ...
	I0108 19:16:27.230215   85911 start.go:228] waiting for startup goroutines ...
	I0108 19:16:27.260284   85911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65038 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/kubernetes-upgrade-658000/id_rsa Username:docker}
	I0108 19:16:27.314933   85911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:16:27.455924   85911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 19:16:27.908843   85911 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 19:16:27.929860   85911 addons.go:508] enable addons completed in 955.18647ms: enabled=[storage-provisioner default-storageclass]
	I0108 19:16:27.929880   85911 start.go:233] waiting for cluster config update ...
	I0108 19:16:27.929892   85911 start.go:242] writing updated cluster config ...
	I0108 19:16:27.930243   85911 ssh_runner.go:195] Run: rm -f paused
	I0108 19:16:27.970823   85911 start.go:600] kubectl: 1.28.2, cluster: 1.29.0-rc.2 (minor skew: 1)
	I0108 19:16:27.991693   85911 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-658000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 09 03:16:10 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:10Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jan 09 03:16:10 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 09 03:16:10 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:10Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 09 03:16:10 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:10Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 09 03:16:10 kubernetes-upgrade-658000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jan 09 03:16:14 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d03d91c37f8e2e9731811e1a934bc9762ebe7eff9359426019e4f7e2698b96b4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:14 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b95b8839a2ecfb4d219d6d54e44975b03e39b9d8aca924bb187a18751255a294/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:14 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f53ff005d904aa2055b401701b5af66099a1d86a2c9b8342de797bc45027a99/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:15 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4176f67f0aced1c2fbc3ecd0c004452e73e80adc8a38664f8301186262688541/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.625552154Z" level=info msg="ignoring event" container=1f93f714fa9feac915c09e86d799dd9d6414915b55145e867bc4be428d749e05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.625728107Z" level=info msg="ignoring event" container=b95b8839a2ecfb4d219d6d54e44975b03e39b9d8aca924bb187a18751255a294 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.625840355Z" level=info msg="ignoring event" container=d03d91c37f8e2e9731811e1a934bc9762ebe7eff9359426019e4f7e2698b96b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.625871558Z" level=info msg="ignoring event" container=4176f67f0aced1c2fbc3ecd0c004452e73e80adc8a38664f8301186262688541 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.697299636Z" level=info msg="ignoring event" container=ce18049da2fca6ed26badd9983b55dd52b68c68610fa8e8a675f36ffe2480859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.704206107Z" level=info msg="ignoring event" container=4f53ff005d904aa2055b401701b5af66099a1d86a2c9b8342de797bc45027a99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:19 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:19.706492677Z" level=info msg="ignoring event" container=35e960afd88ad373a9a7707549ee105e7ec26dc4b324a538a8a35576a62598af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:20 kubernetes-upgrade-658000 dockerd[3638]: time="2024-01-09T03:16:20.000047895Z" level=info msg="ignoring event" container=53ddcdf14302ae483f78b5d5dd92b326d52b009fb5c066a01acdad7a66c61176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e78d9209619ad7cf806c5e6003818e56e9b9af2a36ee179273d92060c6a663ed/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: W0109 03:16:20.319961    3926 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77b392b82fd5658621018ac719b5d4fd22417f4240606f6555c81a1c0430a852/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: W0109 03:16:20.323101    3926 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3894f09a347c156c3ecef934c92b1c432b6cf1fe697d6e14fc86c3865656b61a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: W0109 03:16:20.329811    3926 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: time="2024-01-09T03:16:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/426c23aef60478bcecf83302c18b269addc94d6e85731ae2069ee9ae5906aed8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jan 09 03:16:20 kubernetes-upgrade-658000 cri-dockerd[3926]: W0109 03:16:20.336601    3926 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2902eb32d8f6       bbb47a0f83324       7 seconds ago       Running             kube-apiserver            2                   426c23aef6047       kube-apiserver-kubernetes-upgrade-658000
	3590071b218a3       a0eed15eed449       7 seconds ago       Running             etcd                      2                   77b392b82fd56       etcd-kubernetes-upgrade-658000
	193f3338f38d1       4270645ed6b7a       7 seconds ago       Running             kube-scheduler            2                   e78d9209619ad       kube-scheduler-kubernetes-upgrade-658000
	1408ee1a6824d       d4e01cdf63970       7 seconds ago       Running             kube-controller-manager   2                   3894f09a347c1       kube-controller-manager-kubernetes-upgrade-658000
	35e960afd88ad       d4e01cdf63970       14 seconds ago      Exited              kube-controller-manager   1                   4176f67f0aced       kube-controller-manager-kubernetes-upgrade-658000
	1f93f714fa9fe       4270645ed6b7a       14 seconds ago      Exited              kube-scheduler            1                   4f53ff005d904       kube-scheduler-kubernetes-upgrade-658000
	ce18049da2fca       a0eed15eed449       14 seconds ago      Exited              etcd                      1                   b95b8839a2ecf       etcd-kubernetes-upgrade-658000
	53ddcdf14302a       bbb47a0f83324       14 seconds ago      Exited              kube-apiserver            1                   d03d91c37f8e2       kube-apiserver-kubernetes-upgrade-658000
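The container status table shows the control plane being restarted mid-upgrade: the attempt-1 containers (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) exited about 14 seconds ago, and their attempt-2 replacements have been running for about 7 seconds. A sketch of how to reproduce this listing on the node, assuming the profile still exists and that crictl is pointed at the cri-dockerd socket (minikube's default setup):

	minikube -p kubernetes-upgrade-658000 ssh -- sudo crictl ps -a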
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-658000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-658000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 03:15:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-658000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 03:16:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 03:16:25 +0000   Tue, 09 Jan 2024 03:15:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 03:16:25 +0000   Tue, 09 Jan 2024 03:15:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 03:16:25 +0000   Tue, 09 Jan 2024 03:15:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 03:16:25 +0000   Tue, 09 Jan 2024 03:15:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-658000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  115273188Ki
	  memory:             6075464Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  115273188Ki
	  memory:             6075464Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2d565f7733248efa28db2b1b72ad557
	  System UUID:                b2d565f7733248efa28db2b1b72ad557
	  Boot ID:                    ad2dc5da-3dd5-4557-b7f2-cf88c9605f21
	  Kernel Version:             6.5.11-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-658000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         11s
	  kube-system                 kube-apiserver-kubernetes-upgrade-658000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-658000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-kubernetes-upgrade-658000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 42s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)  kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-658000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
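The node still carries the node.kubernetes.io/not-ready:NoSchedule taint, which matches the earlier storage-provisioner message ("0/1 nodes are available: 1 node(s) had untolerated taint"). The node-lifecycle controller normally removes this taint once the kubelet reports Ready; since the controller-manager pod is itself still Pending at this point, the taint lingers. A hedged sketch for inspecting it and, if necessary, clearing it manually (the trailing '-' in kubectl taint removes a taint):

	kubectl --context kubernetes-upgrade-658000 get node kubernetes-upgrade-658000 -o jsonpath='{.spec.taints}'
	kubectl --context kubernetes-upgrade-658000 taint node kubernetes-upgrade-658000 node.kubernetes.io/not-ready:NoSchedule-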
	
	
	==> dmesg <==
	
	
	==> etcd [3590071b218a] <==
	{"level":"info","ts":"2024-01-09T03:16:22.420942Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-09T03:16:22.420954Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-09T03:16:22.420991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-01-09T03:16:22.421036Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-01-09T03:16:22.421509Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T03:16:22.421572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T03:16:22.42324Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-09T03:16:22.423382Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-09T03:16:22.423398Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-09T03:16:22.423855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-09T03:16:22.423999Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-09T03:16:23.913686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:23.913801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:23.913821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:23.913835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-01-09T03:16:23.91386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-09T03:16:23.913873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-01-09T03:16:23.913882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-09T03:16:23.915816Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-658000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T03:16:23.915899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T03:16:23.916071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T03:16:23.916703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T03:16:23.916817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T03:16:23.919758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-09T03:16:23.921072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ce18049da2fc] <==
	{"level":"info","ts":"2024-01-09T03:16:15.408967Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-09T03:16:16.830983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-09T03:16:16.83105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-09T03:16:16.831078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-01-09T03:16:16.831093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:16.831099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:16.831107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:16.831114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-09T03:16:16.832786Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-658000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T03:16:16.832834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T03:16:16.833034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T03:16:16.833474Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T03:16:16.833509Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T03:16:16.839748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T03:16:16.841143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-09T03:16:19.583657Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-09T03:16:19.583786Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-658000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-01-09T03:16:19.583931Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-09T03:16:19.584098Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-09T03:16:19.599224Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-09T03:16:19.599407Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-09T03:16:19.59953Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-01-09T03:16:19.602463Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-09T03:16:19.602813Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-09T03:16:19.602839Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-658000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 03:16:30 up  2:35,  0 users,  load average: 2.02, 1.72, 1.27
	Linux kubernetes-upgrade-658000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [53ddcdf14302] <==
	W0109 03:16:19.587681       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587712       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587712       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587742       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587746       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587787       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587817       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.587857       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588016       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588057       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588063       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588115       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588164       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588168       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588168       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588209       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588217       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588254       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588254       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588279       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588311       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588446       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588466       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.588766       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0109 03:16:19.596717       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a2902eb32d8f] <==
	I0109 03:16:25.138492       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0109 03:16:25.196792       1 establishing_controller.go:76] Starting EstablishingController
	I0109 03:16:25.196943       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0109 03:16:25.196977       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0109 03:16:25.197010       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0109 03:16:25.302375       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0109 03:16:25.302792       1 aggregator.go:165] initial CRD sync complete...
	I0109 03:16:25.302854       1 autoregister_controller.go:141] Starting autoregister controller
	I0109 03:16:25.302921       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0109 03:16:25.307636       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 03:16:25.340629       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0109 03:16:25.396732       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0109 03:16:25.397112       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0109 03:16:25.397397       1 shared_informer.go:318] Caches are synced for configmaps
	I0109 03:16:25.397598       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0109 03:16:25.397771       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0109 03:16:25.399396       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0109 03:16:25.404544       1 cache.go:39] Caches are synced for autoregister controller
	E0109 03:16:25.405493       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0109 03:16:26.141171       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0109 03:16:26.787334       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0109 03:16:26.794715       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0109 03:16:26.814958       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0109 03:16:26.830298       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 03:16:26.835684       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1408ee1a6824] <==
	I0109 03:16:27.317221       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0109 03:16:27.317438       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0109 03:16:27.317477       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0109 03:16:27.319586       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0109 03:16:27.319739       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0109 03:16:27.319745       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0109 03:16:27.327600       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0109 03:16:27.327641       1 horizontal.go:200] "Starting HPA controller"
	I0109 03:16:27.327654       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0109 03:16:27.330927       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0109 03:16:27.330973       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0109 03:16:27.331339       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0109 03:16:27.332416       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0109 03:16:27.332499       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0109 03:16:27.332525       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0109 03:16:27.333616       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0109 03:16:27.333655       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0109 03:16:27.333847       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0109 03:16:27.335049       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0109 03:16:27.335097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0109 03:16:27.335085       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0109 03:16:27.335370       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0109 03:16:27.400655       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0109 03:16:27.400733       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0109 03:16:27.400749       1 shared_informer.go:311] Waiting for caches to sync for deployment
	
	
	==> kube-controller-manager [35e960afd88a] <==
	I0109 03:16:15.699589       1 serving.go:380] Generated self-signed cert in-memory
	I0109 03:16:16.133849       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0109 03:16:16.133888       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 03:16:16.135146       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0109 03:16:16.135198       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0109 03:16:16.135359       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0109 03:16:16.135451       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-scheduler [193f3338f38d] <==
	I0109 03:16:23.089320       1 serving.go:380] Generated self-signed cert in-memory
	W0109 03:16:25.209251       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0109 03:16:25.209417       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 03:16:25.209560       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0109 03:16:25.209656       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0109 03:16:25.309395       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0109 03:16:25.309470       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 03:16:25.310915       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0109 03:16:25.311093       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 03:16:25.311440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0109 03:16:25.311470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0109 03:16:25.411918       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [1f93f714fa9f] <==
	I0109 03:16:15.708083       1 serving.go:380] Generated self-signed cert in-memory
	W0109 03:16:18.006883       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0109 03:16:18.007002       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 03:16:18.007036       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0109 03:16:18.007066       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0109 03:16:18.100107       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0109 03:16:18.100196       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 03:16:18.101858       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0109 03:16:18.101921       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 03:16:18.102505       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0109 03:16:18.102732       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0109 03:16:18.203015       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 03:16:19.589367       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0109 03:16:19.589470       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0109 03:16:19.589672       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0109 03:16:19.591366       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819757    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4176f67f0aced1c2fbc3ecd0c004452e73e80adc8a38664f8301186262688541"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819766    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c009d32b0398c5a8e26ef06dfd816a7a042d7f8c929788cdb17da348f1913ca5"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819777    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95b8839a2ecfb4d219d6d54e44975b03e39b9d8aca924bb187a18751255a294"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819784    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb30267481429ff00e6fdf7377b5effda5956f551a61adcdc1878cd4cd86a9ad"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819794    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f53ff005d904aa2055b401701b5af66099a1d86a2c9b8342de797bc45027a99"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.819799    5182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b32fe788dbff4c1cd3d1752643b6d7a8512a5184626ddc2ea3d9f0904cf33cd"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.914530    5182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ed3c296fed60c15947806145b1d9d86-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-658000\" (UID: \"2ed3c296fed60c15947806145b1d9d86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.914647    5182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ed3c296fed60c15947806145b1d9d86-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-658000\" (UID: \"2ed3c296fed60c15947806145b1d9d86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.914715    5182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ed3c296fed60c15947806145b1d9d86-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-658000\" (UID: \"2ed3c296fed60c15947806145b1d9d86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.914757    5182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ed3c296fed60c15947806145b1d9d86-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-658000\" (UID: \"2ed3c296fed60c15947806145b1d9d86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.915031    5182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ed3c296fed60c15947806145b1d9d86-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-658000\" (UID: \"2ed3c296fed60c15947806145b1d9d86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:21.928613    5182 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-658000"
	Jan 09 03:16:21 kubernetes-upgrade-658000 kubelet[5182]: E0109 03:16:21.928900    5182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-658000"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:22.104002    5182 scope.go:117] "RemoveContainer" containerID="35e960afd88ad373a9a7707549ee105e7ec26dc4b324a538a8a35576a62598af"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:22.113449    5182 scope.go:117] "RemoveContainer" containerID="1f93f714fa9feac915c09e86d799dd9d6414915b55145e867bc4be428d749e05"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:22.120858    5182 scope.go:117] "RemoveContainer" containerID="ce18049da2fca6ed26badd9983b55dd52b68c68610fa8e8a675f36ffe2480859"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:22.127737    5182 scope.go:117] "RemoveContainer" containerID="53ddcdf14302ae483f78b5d5dd92b326d52b009fb5c066a01acdad7a66c61176"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: E0109 03:16:22.212904    5182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-658000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:22.408998    5182 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-658000"
	Jan 09 03:16:22 kubernetes-upgrade-658000 kubelet[5182]: E0109 03:16:22.409582    5182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-658000"
	Jan 09 03:16:23 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:23.217730    5182 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-658000"
	Jan 09 03:16:25 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:25.407538    5182 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-658000"
	Jan 09 03:16:25 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:25.407627    5182 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-658000"
	Jan 09 03:16:25 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:25.603397    5182 apiserver.go:52] "Watching apiserver"
	Jan 09 03:16:25 kubernetes-upgrade-658000 kubelet[5182]: I0109 03:16:25.611187    5182 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-658000 -n kubernetes-upgrade-658000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-658000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-kubernetes-upgrade-658000 kube-controller-manager-kubernetes-upgrade-658000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-658000 describe pod etcd-kubernetes-upgrade-658000 kube-controller-manager-kubernetes-upgrade-658000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-658000 describe pod etcd-kubernetes-upgrade-658000 kube-controller-manager-kubernetes-upgrade-658000 storage-provisioner: exit status 1 (55.930867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-kubernetes-upgrade-658000" not found
	Error from server (NotFound): pods "kube-controller-manager-kubernetes-upgrade-658000" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-658000 describe pod etcd-kubernetes-upgrade-658000 kube-controller-manager-kubernetes-upgrade-658000 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-658000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-658000: (2.474006747s)
--- FAIL: TestKubernetesUpgrade (319.52s)

                                                
                                    
TestMissingContainerUpgrade (41.86s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker 
E0108 19:10:50.463990   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker : exit status 70 (29.906080426s)

                                                
                                                
-- stdout --
	* [missing-upgrade-879000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:10:48.134179948 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "missing-upgrade-879000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:11:01.343179822 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p missing-upgrade-879000", then "minikube start -p missing-upgrade-879000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (carriage-return download progress flattened; intermediate sizes elided)* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:11:01.343179822 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker : exit status 70 (2.820553983s)

                                                
                                                
-- stdout --
	* [missing-upgrade-879000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-879000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3998310067.exe start -p missing-upgrade-879000 --memory=2200 --driver=docker : exit status 70 (3.924985718s)

                                                
                                                
-- stdout --
	* [missing-upgrade-879000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-879000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:328: release start failed: exit status 70
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2024-01-08 19:11:12.055694 -0800 PST m=+2382.190319587
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-879000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-879000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c",
	        "Created": "2024-01-09T03:10:56.232262792Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:10:56.40658056Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c/e6f501829f437241b520fd3aa8f40fc796ee471693c7845d898cb52dd2e9e69c-json.log",
	        "Name": "/missing-upgrade-879000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-879000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/befe66fab802bb4e8c1d6d9774b90c18a2ae3a3dfac32ddf7dc0da7588d5e589-init/diff:/var/lib/docker/overlay2/3a6290af1f9fb00a167faa49909a8c5ab47438b8c78b73bc73069ba1a8dc9df5/diff:/var/lib/docker/overlay2/ec2f3de792bcb0fe1a578a749fb0e0627e42025ade20dc76660b59a69331712f/diff:/var/lib/docker/overlay2/327a876bdb94c462457ec170672bc17753f2a6bcd4c629c13d9fee2e4b0d0f5f/diff:/var/lib/docker/overlay2/7da6105b2a85fd0072ee2938b24fec478fe62881146e2ade5b30f53e36ef5442/diff:/var/lib/docker/overlay2/224e6d58c682206b52f22cec4bf8069c38a1adc99ddc098643e156487a1a483c/diff:/var/lib/docker/overlay2/cd5bc72add223e92068d5f54ba89258d0a795e49fa3e1a1ead8a40d49ea66b10/diff:/var/lib/docker/overlay2/d9c68b5ebb2703a3dd61fca199b2de6997cd89decf05b9b3b0875308650ca009/diff:/var/lib/docker/overlay2/5f7ddf4b0a05f9d640ea75a17ced3d1827200ba230efa895044a71b42088c4aa/diff:/var/lib/docker/overlay2/81f74dd343a22db97bc46012b61dc4cdc49486c195ba34bf2a4e5b4d1ec62d9d/diff:/var/lib/docker/overlay2/2934ed
71f19dbd83587218f6f0a407ea57758149bdfebb7f500ea6e59734a17a/diff:/var/lib/docker/overlay2/18d80c6d16684bccdfb9068bf694fc3d0ab2612293c6c92fd8a2876368c74e07/diff:/var/lib/docker/overlay2/977962fd7a5a4b475269e3e7fdbf6f1e1b349b5ece0ce6ba059252668e838545/diff:/var/lib/docker/overlay2/1f468b4d8b733cec7aa70a0128bb91b51ae359d0466095da2971a0763d4313be/diff:/var/lib/docker/overlay2/4d17689f340896451f73daf938bf76c3eb5684beff10710afa8eaf29de3e871d/diff:/var/lib/docker/overlay2/6d35290a238ea19a190e50167ac1f5cacb4440903f8124a419db0f000708c68e/diff:/var/lib/docker/overlay2/418a9b09b84c93accc660ec34ede73050503924dfecdc4923c5ceedc8fccf224/diff:/var/lib/docker/overlay2/9ef48ea3e759c63e32e849b1bad501c7d587982611f2d2c435404f51ac5c389a/diff:/var/lib/docker/overlay2/4c26904d7773ed9456b18068ad648cda4206f2dc68bde11eb0183e3ef59c996e/diff:/var/lib/docker/overlay2/433bace7b63d8cd8311577d2ea0240f82c746323c0b0fc5d86564aabab99b79e/diff:/var/lib/docker/overlay2/b3bb26e7399d8cdabb735371247852fe18aa7d55968ec7235d91b03ea0ce1002/diff:/var/lib/d
ocker/overlay2/7b262596826f20b97d430eceb9b5dd9815629c01caa944ea2c55fa69def45d14/diff",
	                "MergedDir": "/var/lib/docker/overlay2/befe66fab802bb4e8c1d6d9774b90c18a2ae3a3dfac32ddf7dc0da7588d5e589/merged",
	                "UpperDir": "/var/lib/docker/overlay2/befe66fab802bb4e8c1d6d9774b90c18a2ae3a3dfac32ddf7dc0da7588d5e589/diff",
	                "WorkDir": "/var/lib/docker/overlay2/befe66fab802bb4e8c1d6d9774b90c18a2ae3a3dfac32ddf7dc0da7588d5e589/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-879000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-879000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-879000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-879000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-879000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c6976c454b81ddbdfd6d31a86ba28689e37fc065c6c04c113a110c134237769",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64741"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64742"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64743"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4c6976c454b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "91b5267e6e74dfb9a0b1d7991879b98587aab4943345c154d0b5df504d8001e3",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "NetworkID": "186365f5f9476a341bd6e61888a3e950b0d05a8881bb17afb5c533a1be684d09",
	                    "EndpointID": "91b5267e6e74dfb9a0b1d7991879b98587aab4943345c154d0b5df504d8001e3",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
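The inspect output above is the complete container record. For this failure the informative fields are State.Status (the container itself is still running; only dockerd inside it is down) and the published ports. The one-liner below narrows the dump to just those fields; it is a convenience sketch and assumes jq is installed on the host:

	docker inspect missing-upgrade-879000 | jq '.[0] | {status: .State.Status, ports: .NetworkSettings.Ports}'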
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-879000 -n missing-upgrade-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-879000 -n missing-upgrade-879000: exit status 6 (365.865342ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 19:11:12.464077   84320 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-879000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-879000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-879000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-879000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-879000: (2.130904863s)
--- FAIL: TestMissingContainerUpgrade (41.86s)
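TestMissingContainerUpgrade fails at the provisioning step: the legacy minikube binary rewrites /lib/systemd/system/docker.service and docker then refuses to start. The comments embedded in the rewritten unit describe the standard systemd override pattern, in which an empty ExecStart= first clears the command inherited from the base unit, since systemd rejects a second ExecStart= on anything but Type=oneshot services. The diff also shows that the rewritten unit drops $MAINPID from ExecReload, leaving /bin/kill -s HUP with no target process. The sketch below applies the same clear-then-set pattern as a drop-in while keeping $MAINPID; the override path and dockerd flags here are illustrative assumptions, not minikube's actual template:

	# Illustrative drop-in; the path and dockerd flags are assumptions, not minikube's.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# An empty ExecStart= clears the command inherited from the base unit;
	# without it, systemd rejects a second ExecStart= for non-oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	# Keep $MAINPID so the reload signal reaches the daemon's main process.
	ExecReload=/bin/kill -s HUP $MAINPID
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart docker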

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (41.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker 
E0108 19:12:48.533177   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker : exit status 70 (29.811381112s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-702000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3432228971
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:12:48.978382940 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-702000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5933MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:13:02.334382813 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-702000", then "minikube start -p stopped-upgrade-702000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (carriage-return download progress flattened; intermediate sizes elided)* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:13:02.334382813 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker : exit status 70 (3.985325577s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-702000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2244413155
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-702000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1993688225.exe start -p stopped-upgrade-702000 --memory=2200 --vm-driver=docker : exit status 70 (3.984950293s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-702000] minikube v1.9.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig934843850
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-702000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (41.19s)
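Both upgrade tests fail identically: "sudo systemctl start docker" exits non-zero inside the kic container, and the log only carries systemd's generic hint to check "systemctl status docker.service" and "journalctl -xe". When reproducing locally, the commands below pull the dockerd-specific error out of the unit's journal instead of that generic job-failure message. This is a sketch that assumes the container named in the log is still running and uses systemd as init:

	# Query systemd inside the minikube container rather than on the macOS host.
	docker exec stopped-upgrade-702000 systemctl status docker.service --no-pager
	# The real dockerd start error lands in the unit's journal.
	docker exec stopped-upgrade-702000 journalctl -u docker.service --no-pager -n 50
	# Static syntax check of the rewritten unit file.
	docker exec stopped-upgrade-702000 systemd-analyze verify /lib/systemd/system/docker.service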

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (253.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0108 19:22:23.418902   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:22:25.706048   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:25.711655   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:25.721852   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:25.742652   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:25.782779   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:25.862912   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:26.023331   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:26.343652   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:26.983844   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:28.264160   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:30.824240   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:35.944243   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:22:43.899545   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:22:46.185764   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:23:06.665464   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m13.080370557s)

                                                
                                                
-- stdout --
	* [old-k8s-version-901000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-901000 in cluster old-k8s-version-901000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 19:22:23.388239   90261 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:22:23.388571   90261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:22:23.388583   90261 out.go:309] Setting ErrFile to fd 2...
	I0108 19:22:23.388588   90261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:22:23.388814   90261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:22:23.390417   90261 out.go:303] Setting JSON to false
	I0108 19:22:23.414241   90261 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":37315,"bootTime":1704733228,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:22:23.414348   90261 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:22:23.435653   90261 out.go:177] * [old-k8s-version-901000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:22:23.477212   90261 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:22:23.498445   90261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:22:23.477305   90261 notify.go:220] Checking for updates...
	I0108 19:22:23.540170   90261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:22:23.561291   90261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:22:23.582436   90261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:22:23.603228   90261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:22:23.624945   90261 config.go:182] Loaded profile config "kubenet-798000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:22:23.625046   90261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:22:23.683789   90261 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:22:23.683953   90261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:22:23.793522   90261 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:22:23.782697356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconf
ined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Mana
ges Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/d
ocker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:22:23.835736   90261 out.go:177] * Using the docker driver based on user configuration
	I0108 19:22:23.856767   90261 start.go:298] selected driver: docker
	I0108 19:22:23.856780   90261 start.go:902] validating driver "docker" against <nil>
	I0108 19:22:23.856788   90261 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:22:23.860191   90261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:22:23.969312   90261 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:22:23.959744735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
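
Driver validation shells out to "docker system info" with a JSON template (as the Run line above shows) and decodes the result into the info struct dumped here. A minimal sketch for pulling just a few of the fields minikube acts on, assuming only a local docker CLI; the field names (Driver, CgroupDriver, NCPU, MemTotal) are the ones visible in the dump:

    # Query a handful of fields from the same info struct, no JSON parsing needed
    docker system info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} cpus={{.NCPU}} mem={{.MemTotal}}'
    # Or capture the complete blob the validator sees:
    docker system info --format '{{json .}}' > docker-info.json
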
	I0108 19:22:23.969510   90261 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 19:22:23.969694   90261 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 19:22:23.991201   90261 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 19:22:24.011933   90261 cni.go:84] Creating CNI manager for ""
	I0108 19:22:24.011951   90261 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:22:24.011962   90261 start_flags.go:321] config:
	{Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:22:24.032955   90261 out.go:177] * Starting control plane node old-k8s-version-901000 in cluster old-k8s-version-901000
	I0108 19:22:24.074754   90261 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:22:24.095925   90261 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:22:24.116991   90261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:22:24.117014   90261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:22:24.117032   90261 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 19:22:24.117045   90261 cache.go:56] Caching tarball of preloaded images
	I0108 19:22:24.117168   90261 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:22:24.117178   90261 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 19:22:24.117606   90261 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/config.json ...
	I0108 19:22:24.117759   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/config.json: {Name:mk061b505f4d8e86d78dcdcffd8710c353990da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
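
The cluster config dumped above is persisted here: WriteFile takes a profile-scoped lock, then writes config.json under the profile directory. On this CI rig the .minikube root is a custom integration path; on a default install it is $HOME/.minikube. A sketch for inspecting the saved profile locally, assuming jq is installed (the field names come from the dumped config):

    # Print the fields that drove this run (adjust the .minikube root for your setup)
    jq '{driver: .Driver, k8s: .KubernetesConfig.KubernetesVersion, memory: .Memory}' \
      "$HOME/.minikube/profiles/old-k8s-version-901000/config.json"
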
	I0108 19:22:24.169504   90261 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:22:24.169522   90261 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:22:24.169544   90261 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:22:24.169588   90261 start.go:365] acquiring machines lock for old-k8s-version-901000: {Name:mk41257d6f9820536f749153d111ded94c6d377e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:22:24.169770   90261 start.go:369] acquired machines lock for "old-k8s-version-901000" in 168.915µs
	I0108 19:22:24.169798   90261 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 19:22:24.169954   90261 start.go:125] createHost starting for "" (driver="docker")
	I0108 19:22:24.214407   90261 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 19:22:24.214616   90261 start.go:159] libmachine.API.Create for "old-k8s-version-901000" (driver="docker")
	I0108 19:22:24.214644   90261 client.go:168] LocalClient.Create starting
	I0108 19:22:24.214747   90261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem
	I0108 19:22:24.214794   90261 main.go:141] libmachine: Decoding PEM data...
	I0108 19:22:24.214813   90261 main.go:141] libmachine: Parsing certificate...
	I0108 19:22:24.214870   90261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem
	I0108 19:22:24.214905   90261 main.go:141] libmachine: Decoding PEM data...
	I0108 19:22:24.214913   90261 main.go:141] libmachine: Parsing certificate...
	I0108 19:22:24.215381   90261 cli_runner.go:164] Run: docker network inspect old-k8s-version-901000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 19:22:24.267770   90261 cli_runner.go:211] docker network inspect old-k8s-version-901000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 19:22:24.267866   90261 network_create.go:281] running [docker network inspect old-k8s-version-901000] to gather additional debugging logs...
	I0108 19:22:24.267885   90261 cli_runner.go:164] Run: docker network inspect old-k8s-version-901000
	W0108 19:22:24.320255   90261 cli_runner.go:211] docker network inspect old-k8s-version-901000 returned with exit code 1
	I0108 19:22:24.320283   90261 network_create.go:284] error running [docker network inspect old-k8s-version-901000]: docker network inspect old-k8s-version-901000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-901000 not found
	I0108 19:22:24.320303   90261 network_create.go:286] output of [docker network inspect old-k8s-version-901000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-901000 not found
	
	** /stderr **
	I0108 19:22:24.320448   90261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 19:22:24.374576   90261 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:22:24.376146   90261 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:22:24.377717   90261 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:22:24.379088   90261 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0108 19:22:24.379469   90261 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002404c90}
	I0108 19:22:24.379485   90261 network_create.go:124] attempt to create docker network old-k8s-version-901000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0108 19:22:24.379557   90261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-901000 old-k8s-version-901000
	I0108 19:22:24.468838   90261 network_create.go:108] docker network old-k8s-version-901000 192.168.85.0/24 created
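
Subnet selection, as the "skipping subnet ... that is reserved" lines show, walks the 192.168.x.0/24 private ranges (49, 58, 67, 76, ...) and takes the first one no existing docker network has claimed before creating the bridge network with a fixed gateway and MTU. The reservations it skips can be listed with the same Go template the inspect calls above use; this sketch assumes only a docker CLI:

    # Enumerate the IPv4 subnets existing docker networks already occupy
    for n in $(docker network ls -q); do
      docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"
    done
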
	I0108 19:22:24.468904   90261 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-901000" container
	I0108 19:22:24.469007   90261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 19:22:24.521373   90261 cli_runner.go:164] Run: docker volume create old-k8s-version-901000 --label name.minikube.sigs.k8s.io=old-k8s-version-901000 --label created_by.minikube.sigs.k8s.io=true
	I0108 19:22:24.574175   90261 oci.go:103] Successfully created a docker volume old-k8s-version-901000
	I0108 19:22:24.574292   90261 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-901000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-901000 --entrypoint /usr/bin/test -v old-k8s-version-901000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0108 19:22:25.105998   90261 oci.go:107] Successfully prepared a docker volume old-k8s-version-901000
	I0108 19:22:25.106044   90261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:22:25.106059   90261 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 19:22:25.106178   90261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-901000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 19:22:27.493466   90261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-901000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.3872886s)
	I0108 19:22:27.493495   90261 kic.go:203] duration metric: took 2.387489 seconds to extract preloaded images to volume
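
The preload extraction above never touches the node's filesystem from the host: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, and the output lands in the named volume that later becomes the node's /var. The same command, reflowed for readability (the tarball path and image digest are the ones from the log lines above):

    PRELOAD=/Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
    KICBASE='gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0'
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" \
      -v old-k8s-version-901000:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
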
	I0108 19:22:27.493597   90261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 19:22:27.619159   90261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-901000 --name old-k8s-version-901000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-901000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-901000 --network old-k8s-version-901000 --ip 192.168.85.2 --volume old-k8s-version-901000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0108 19:22:27.958607   90261 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Running}}
	I0108 19:22:28.025802   90261 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Status}}
	I0108 19:22:28.087434   90261 cli_runner.go:164] Run: docker exec old-k8s-version-901000 stat /var/lib/dpkg/alternatives/iptables
	I0108 19:22:28.264991   90261 oci.go:144] the created container "old-k8s-version-901000" has a running status.
	I0108 19:22:28.265023   90261 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa...
	I0108 19:22:28.656732   90261 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 19:22:28.718865   90261 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Status}}
	I0108 19:22:28.776936   90261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 19:22:28.776957   90261 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-901000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 19:22:28.877048   90261 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Status}}
	I0108 19:22:28.930425   90261 machine.go:88] provisioning docker machine ...
	I0108 19:22:28.930483   90261 ubuntu.go:169] provisioning hostname "old-k8s-version-901000"
	I0108 19:22:28.930586   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:28.981587   90261 main.go:141] libmachine: Using SSH client type: native
	I0108 19:22:28.981915   90261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 49929 <nil> <nil>}
	I0108 19:22:28.981927   90261 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901000 && echo "old-k8s-version-901000" | sudo tee /etc/hostname
	I0108 19:22:29.124159   90261 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901000
	
	I0108 19:22:29.124260   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:29.176984   90261 main.go:141] libmachine: Using SSH client type: native
	I0108 19:22:29.177283   90261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 49929 <nil> <nil>}
	I0108 19:22:29.177296   90261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:22:29.310684   90261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 19:22:29.310705   90261 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:22:29.310727   90261 ubuntu.go:177] setting up certificates
	I0108 19:22:29.310735   90261 provision.go:83] configureAuth start
	I0108 19:22:29.310807   90261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:22:29.361767   90261 provision.go:138] copyHostCerts
	I0108 19:22:29.361869   90261 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:22:29.361878   90261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:22:29.361998   90261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:22:29.362215   90261 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:22:29.362222   90261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:22:29.362295   90261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:22:29.362458   90261 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:22:29.362464   90261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:22:29.362537   90261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:22:29.362681   90261 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-901000]
	I0108 19:22:29.475890   90261 provision.go:172] copyRemoteCerts
	I0108 19:22:29.475949   90261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:22:29.476002   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:29.527576   90261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49929 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:22:29.622168   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:22:29.645591   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 19:22:29.668531   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 19:22:29.690992   90261 provision.go:86] duration metric: configureAuth took 380.252038ms
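
configureAuth minted a server certificate whose SAN list (printed above) covers the container's static IP, localhost, and the node hostname, then copied ca.pem, server.pem, and server-key.pem into /etc/docker over SSH. One way to confirm what the generated cert carries, assuming OpenSSL 1.1.1+ on the host (this CI run keeps its .minikube root under the integration path; substitute yours):

    # Show subject and SANs of the server cert written under the machines dir
    openssl x509 -in "$HOME/.minikube/machines/server.pem" \
      -noout -subject -ext subjectAltName
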
	I0108 19:22:29.691007   90261 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:22:29.691156   90261 config.go:182] Loaded profile config "old-k8s-version-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 19:22:29.691222   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:29.751561   90261 main.go:141] libmachine: Using SSH client type: native
	I0108 19:22:29.751927   90261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 49929 <nil> <nil>}
	I0108 19:22:29.751944   90261 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:22:29.888237   90261 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:22:29.888253   90261 ubuntu.go:71] root file system type: overlay
	I0108 19:22:29.888386   90261 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:22:29.888486   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:29.951221   90261 main.go:141] libmachine: Using SSH client type: native
	I0108 19:22:29.951616   90261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 49929 <nil> <nil>}
	I0108 19:22:29.951672   90261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:22:30.096754   90261 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:22:30.096856   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:30.149094   90261 main.go:141] libmachine: Using SSH client type: native
	I0108 19:22:30.149385   90261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 49929 <nil> <nil>}
	I0108 19:22:30.149398   90261 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:22:30.718594   90261 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-01-09 03:22:30.094959835 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
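
The command that produced this output is an idempotent install: diff -u exits non-zero only when the freshly rendered docker.service.new differs from the installed unit, so the mv/daemon-reload/enable/restart chain after || runs only when something changed, and the unit diff shown here is diff's own output from a run that did change it. The idiom in isolation, with hypothetical file and service names:

    # Swap in a config file and restart its service only if the content changed
    sudo diff -u /etc/example/app.conf /etc/example/app.conf.new \
      || { sudo mv /etc/example/app.conf.new /etc/example/app.conf; \
           sudo systemctl daemon-reload && sudo systemctl restart example; }
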
	
	I0108 19:22:30.718629   90261 machine.go:91] provisioned docker machine in 1.788211026s
	I0108 19:22:30.718638   90261 client.go:171] LocalClient.Create took 6.504130733s
	I0108 19:22:30.718680   90261 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-901000" took 6.504204726s
	I0108 19:22:30.718693   90261 start.go:300] post-start starting for "old-k8s-version-901000" (driver="docker")
	I0108 19:22:30.718703   90261 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:22:30.718795   90261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:22:30.718888   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:30.775848   90261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49929 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:22:30.870010   90261 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:22:30.874631   90261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:22:30.874655   90261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:22:30.874663   90261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:22:30.874668   90261 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:22:30.874684   90261 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:22:30.874791   90261 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:22:30.874985   90261 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:22:30.875196   90261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:22:30.883400   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:22:30.903840   90261 start.go:303] post-start completed in 185.1418ms
	I0108 19:22:30.904418   90261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:22:30.956155   90261 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/config.json ...
	I0108 19:22:30.956621   90261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:22:30.956679   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:31.007721   90261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49929 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:22:31.099996   90261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:22:31.104741   90261 start.go:128] duration metric: createHost completed in 6.93492105s
	I0108 19:22:31.104763   90261 start.go:83] releasing machines lock for "old-k8s-version-901000", held for 6.935137574s
	I0108 19:22:31.104856   90261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:22:31.156547   90261 ssh_runner.go:195] Run: cat /version.json
	I0108 19:22:31.156571   90261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:22:31.156627   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:31.156654   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:31.210419   90261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49929 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:22:31.210415   90261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49929 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:22:31.305282   90261 ssh_runner.go:195] Run: systemctl --version
	I0108 19:22:31.453114   90261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 19:22:31.458272   90261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 19:22:31.484603   90261 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 19:22:31.484709   90261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 19:22:31.503497   90261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 19:22:31.522324   90261 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
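
The two find/sed passes above rewrite any bridge and podman CNI configs in place: IPv6 dst/subnet entries are dropped and the pod subnet is pinned to 10.244.0.0/16 (with 10.244.0.1 as the podman gateway); the -printf "%p, " output is what names the two patched files in the "configured ... bridge cni config(s)" line. A quick check of the rewritten values on the node (file names taken from that line):

    # Inspect the subnets the patched CNI configs now declare
    sudo grep '"subnet"' /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist
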
	I0108 19:22:31.522342   90261 start.go:475] detecting cgroup driver to use...
	I0108 19:22:31.522355   90261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:22:31.522466   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:22:31.539362   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0108 19:22:31.550183   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:22:31.560612   90261 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:22:31.560702   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:22:31.574470   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:22:31.586122   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:22:31.595219   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:22:31.604316   90261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:22:31.616280   90261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 19:22:31.630668   90261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:22:31.643924   90261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:22:31.654190   90261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:22:31.703319   90261 ssh_runner.go:195] Run: sudo systemctl restart containerd
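
Taken together, the edits above normalize containerd before docker takes over: crictl is pointed at the containerd socket, the sandbox (pause) image is pinned, SystemdCgroup is forced off to match the cgroupfs driver detected on the host, legacy io.containerd.runtime.v1/runc.v1 runtime names are rewritten to io.containerd.runc.v2, and conf_dir is set to /etc/cni/net.d; daemon-reload plus restart then apply it all. Spot-checking the result on the node:

    # Verify the settings the sed edits above established
    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
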
	I0108 19:22:31.868950   90261 start.go:475] detecting cgroup driver to use...
	I0108 19:22:31.868973   90261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:22:31.869042   90261 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:22:31.893756   90261 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:22:31.893823   90261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:22:31.905317   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:22:31.927805   90261 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:22:31.934265   90261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:22:31.946227   90261 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:22:31.968200   90261 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:22:32.050920   90261 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:22:32.111809   90261 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:22:32.111899   90261 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:22:32.129267   90261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:22:32.201595   90261 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:22:32.438737   90261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:22:32.461724   90261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:22:32.508678   90261 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0108 19:22:32.508762   90261 cli_runner.go:164] Run: docker exec -t old-k8s-version-901000 dig +short host.docker.internal
	I0108 19:22:32.624551   90261 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:22:32.624651   90261 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:22:32.629514   90261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
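
The hosts-file update above is filter-and-rewrite rather than blind append: grep -v strips any stale host.minikube.internal line, the fresh mapping for the host IP dug out of DNS is echoed after it, and the assembled copy goes to a temp file before being cp'd back with sudo, so re-runs never duplicate the entry. The same pattern for any managed hosts entry (the names and IP below are hypothetical):

    # Replace-or-add a single managed line in /etc/hosts without duplicates
    ENTRY=$'192.0.2.10\tmyservice.internal'
    { grep -v $'\tmyservice.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/hosts.$$ \
      && sudo cp /tmp/hosts.$$ /etc/hosts
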
	I0108 19:22:32.641958   90261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:22:32.695910   90261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:22:32.695991   90261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:22:32.714518   90261 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:22:32.714550   90261 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:22:32.714617   90261 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:22:32.724567   90261 ssh_runner.go:195] Run: which lz4
	I0108 19:22:32.729846   90261 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 19:22:32.734593   90261 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 19:22:32.734627   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0108 19:22:37.804168   90261 docker.go:635] Took 5.074499 seconds to copy over tarball
	I0108 19:22:37.804246   90261 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 19:22:39.364005   90261 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.559761147s)
	I0108 19:22:39.364025   90261 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 19:22:39.402360   90261 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:22:39.410841   90261 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0108 19:22:39.426286   90261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:22:39.481164   90261 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:22:40.002417   90261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:22:40.020780   90261 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:22:40.020796   90261 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:22:40.020807   90261 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
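
The mismatch driving LoadImages is the registry rename: the v1.16.0 preload ships images under the old k8s.gcr.io prefix (see the image list above), while this minikube build expects registry.k8s.io names, so the apiserver check reports "wasn't preloaded" and each expected image is resolved from the on-disk cache under .minikube/cache/images instead. The per-image check in the lines that follow reduces to an inspect-by-name, roughly:

    # An image "needs transfer" when no local image matches the expected name
    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.16.0 >/dev/null 2>&1 \
      || echo "needs transfer: will load from the minikube image cache"
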
	I0108 19:22:40.025921   90261 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:22:40.026139   90261 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:22:40.026255   90261 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:22:40.026372   90261 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:22:40.026449   90261 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:22:40.026454   90261 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 19:22:40.026708   90261 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:22:40.027003   90261 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 19:22:40.032619   90261 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 19:22:40.032635   90261 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:22:40.032637   90261 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:22:40.034040   90261 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:22:40.033967   90261 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:22:40.034158   90261 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:22:40.034638   90261 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:22:40.034802   90261 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 19:22:40.441037   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 19:22:40.459710   90261 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 19:22:40.459773   90261 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0108 19:22:40.459832   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0108 19:22:40.465132   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:22:40.480336   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 19:22:40.486605   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 19:22:40.488102   90261 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 19:22:40.488140   90261 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:22:40.488195   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:22:40.506704   90261 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 19:22:40.506744   90261 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:22:40.506826   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0108 19:22:40.508852   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 19:22:40.526332   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 19:22:40.566797   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:22:40.587067   90261 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 19:22:40.587098   90261 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:22:40.587172   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:22:40.592555   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:22:40.606592   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 19:22:40.612067   90261 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 19:22:40.612098   90261 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:22:40.612158   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:22:40.629329   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 19:22:40.659412   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:22:40.678883   90261 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 19:22:40.678909   90261 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:22:40.678976   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:22:40.697215   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 19:22:40.765414   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 19:22:40.779755   90261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:22:40.784934   90261 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 19:22:40.784960   90261 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 19:22:40.785029   90261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0108 19:22:40.804576   90261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 19:22:40.804625   90261 cache_images.go:92] LoadImages completed in 783.824842ms
	W0108 19:22:40.804681   90261 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
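The "Unable to load cached images" warning is the direct consequence: the transfer step reached for a cache file (pause_3.1) that was never downloaded in this run. A sketch of how one might confirm and repopulate the cache from the host, assuming the default .minikube location and that this build supports minikube's --download-only pre-pull flag:

    ls -l ~/.minikube/cache/images/amd64/registry.k8s.io/
    minikube start -p old-k8s-version-901000 --kubernetes-version=v1.16.0 --download-only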
	I0108 19:22:40.804754   90261 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:22:40.853018   90261 cni.go:84] Creating CNI manager for ""
	I0108 19:22:40.853035   90261 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:22:40.853059   90261 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:22:40.853076   90261 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901000 NodeName:old-k8s-version-901000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 19:22:40.853171   90261 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-901000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-901000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
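The dump above is one multi-document YAML file: InitConfiguration and ClusterConfiguration against kubeadm's v1beta1 API (the newest version v1.16 accepts), plus KubeletConfiguration and KubeProxyConfiguration. To inspect exactly what was written to the node, something like the following should work while the profile is up:

    minikube -p old-k8s-version-901000 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml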
	I0108 19:22:40.853223   90261 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-901000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
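In the unit drop-in above, the empty ExecStart= line is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service before the next line installs minikube's own kubelet invocation. The merged unit can be checked on the node; a sketch:

    minikube -p old-k8s-version-901000 ssh -- sudo systemctl cat kubelet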
	I0108 19:22:40.853285   90261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 19:22:40.861915   90261 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:22:40.861967   90261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:22:40.870439   90261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0108 19:22:40.885939   90261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:22:40.901497   90261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0108 19:22:40.917307   90261 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:22:40.921479   90261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
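The one-liner above is an idempotent /etc/hosts update: grep -v drops any stale control-plane.minikube.internal line, echo appends the current IP, and the temp file is copied back into place. The same idiom, generalized as a hypothetical helper (not minikube code; the regex treats dots in the hostname loosely):

    update_hosts_entry() {   # usage: update_hosts_entry IP HOSTNAME
      local ip="$1" host="$2"
      { grep -v $'\t'"${host}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.85.2 control-plane.minikube.internal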
	I0108 19:22:40.932137   90261 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000 for IP: 192.168.85.2
	I0108 19:22:40.932157   90261 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:40.932351   90261 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:22:40.932442   90261 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:22:40.932493   90261 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.key
	I0108 19:22:40.932506   90261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.crt with IP's: []
	I0108 19:22:41.034299   90261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.crt ...
	I0108 19:22:41.034317   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.crt: {Name:mk24eabae0a29cb5887008a89084d1c1d755d51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.034689   90261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.key ...
	I0108 19:22:41.034700   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.key: {Name:mk943257cae6de7d470fe92ce94386415a15a8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.034944   90261 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key.43b9df8c
	I0108 19:22:41.034958   90261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 19:22:41.158656   90261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt.43b9df8c ...
	I0108 19:22:41.158678   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt.43b9df8c: {Name:mk3a9f5dc852e9dd2c6775f84cc4335888943edf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.158994   90261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key.43b9df8c ...
	I0108 19:22:41.159005   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key.43b9df8c: {Name:mkfbe71ab17b2d996bcacf2a33994804a162980d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.159208   90261 certs.go:337] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt.43b9df8c -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt
	I0108 19:22:41.159397   90261 certs.go:341] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key.43b9df8c -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key
	I0108 19:22:41.159574   90261 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key
	I0108 19:22:41.159590   90261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.crt with IP's: []
	I0108 19:22:41.381399   90261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.crt ...
	I0108 19:22:41.381411   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.crt: {Name:mkbb92511f1b368a85dc132abf0f034d3a9d7e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.381714   90261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key ...
	I0108 19:22:41.381723   90261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key: {Name:mk1f8b1cfb399fdd9a7556519b654b27742d267b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:22:41.382128   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:22:41.382179   90261 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:22:41.382192   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:22:41.382224   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:22:41.382255   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:22:41.382284   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:22:41.382345   90261 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:22:41.382866   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:22:41.403589   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:22:41.423991   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:22:41.444513   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:22:41.465325   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:22:41.485762   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:22:41.506509   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:22:41.527108   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:22:41.548427   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:22:41.570446   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:22:41.593567   90261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:22:41.614544   90261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:22:41.630620   90261 ssh_runner.go:195] Run: openssl version
	I0108 19:22:41.636409   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:22:41.645616   90261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:22:41.649881   90261 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:22:41.649933   90261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:22:41.656596   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:22:41.665524   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:22:41.674643   90261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:22:41.678774   90261 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:22:41.678818   90261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:22:41.685269   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:22:41.694309   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:22:41.703177   90261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:22:41.707294   90261 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:22:41.707354   90261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:22:41.714112   90261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
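The symlink names in these steps (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: at verification time OpenSSL hashes a certificate's subject and looks for <hash>.0 in /etc/ssl/certs. The link for any CA can be rebuilt by hand; a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"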
	I0108 19:22:41.723422   90261 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:22:41.727637   90261 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 19:22:41.727693   90261 kubeadm.go:404] StartCluster: {Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:22:41.727827   90261 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:22:41.748902   90261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:22:41.757828   90261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:22:41.766620   90261 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:22:41.766674   90261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:22:41.774862   90261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:22:41.774898   90261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:22:41.852969   90261 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:22:41.853008   90261 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:22:42.106001   90261 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:22:42.106087   90261 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:22:42.106170   90261 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:22:42.273462   90261 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:22:42.274393   90261 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:22:42.280195   90261 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:22:42.347672   90261 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:22:42.369322   90261 out.go:204]   - Generating certificates and keys ...
	I0108 19:22:42.369403   90261 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:22:42.369464   90261 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:22:42.455196   90261 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 19:22:42.516790   90261 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 19:22:42.603949   90261 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 19:22:42.736117   90261 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 19:22:42.895025   90261 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 19:22:42.895140   90261 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-901000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0108 19:22:43.066552   90261 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 19:22:43.066665   90261 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-901000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0108 19:22:43.226978   90261 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 19:22:43.402565   90261 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 19:22:43.607173   90261 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 19:22:43.607228   90261 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:22:43.678519   90261 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:22:43.863740   90261 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:22:44.112545   90261 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:22:44.319139   90261 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:22:44.319618   90261 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:22:44.342751   90261 out.go:204]   - Booting up control plane ...
	I0108 19:22:44.342888   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:22:44.343006   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:22:44.343125   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:22:44.343279   90261 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:22:44.343518   90261 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:23:24.328211   90261 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:23:24.328990   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:23:24.329199   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:23:29.329523   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:23:29.329701   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:23:39.330084   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:23:39.330264   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:23:59.331588   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:23:59.331857   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:24:39.332272   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:24:39.332510   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:24:39.332524   90261 kubeadm.go:322] 
	I0108 19:24:39.332561   90261 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:24:39.332606   90261 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:24:39.332614   90261 kubeadm.go:322] 
	I0108 19:24:39.332647   90261 kubeadm.go:322] This error is likely caused by:
	I0108 19:24:39.332678   90261 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:24:39.332837   90261 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:24:39.332852   90261 kubeadm.go:322] 
	I0108 19:24:39.332967   90261 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:24:39.333001   90261 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:24:39.333034   90261 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:24:39.333043   90261 kubeadm.go:322] 
	I0108 19:24:39.333182   90261 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:24:39.333285   90261 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:24:39.333377   90261 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:24:39.333438   90261 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:24:39.333514   90261 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:24:39.333553   90261 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:24:39.335204   90261 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:24:39.335278   90261 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:24:39.335398   90261 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:24:39.335482   90261 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:24:39.335577   90261 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:24:39.335634   90261 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
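On the docker driver, kubeadm's suggested systemctl/journalctl commands have to be run inside the node container rather than on the macOS host. Assuming the kic container is named after the profile (as it is here), a sketch:

    docker exec old-k8s-version-901000 systemctl status kubelet --no-pager
    docker exec old-k8s-version-901000 journalctl -u kubelet -n 100 --no-pager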
	W0108 19:24:39.335755   90261 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-901000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-901000 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 19:24:39.335793   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
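Before retrying, minikube runs kubeadm reset, which tears down the kubeconfig files and static pod manifests under /etc/kubernetes along with the etcd data directory. kubeadm's own reset output notes that iptables state is left behind; if a fully clean slate were needed it could be flushed on the node with something like:

    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X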
	I0108 19:24:39.743634   90261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:24:39.754374   90261 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:24:39.754436   90261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:24:39.762542   90261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:24:39.762564   90261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:24:39.817331   90261 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:24:39.817374   90261 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:24:40.052781   90261 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:24:40.052874   90261 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:24:40.052957   90261 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:24:40.222533   90261 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:24:40.223470   90261 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:24:40.229493   90261 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:24:40.306605   90261 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:24:40.327951   90261 out.go:204]   - Generating certificates and keys ...
	I0108 19:24:40.328012   90261 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:24:40.328078   90261 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:24:40.328143   90261 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 19:24:40.328201   90261 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 19:24:40.328273   90261 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 19:24:40.328325   90261 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 19:24:40.328371   90261 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 19:24:40.328411   90261 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 19:24:40.328471   90261 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 19:24:40.328534   90261 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 19:24:40.328570   90261 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 19:24:40.328611   90261 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:24:40.485049   90261 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:24:40.553279   90261 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:24:40.616243   90261 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:24:40.893148   90261 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:24:40.893653   90261 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:24:40.915042   90261 out.go:204]   - Booting up control plane ...
	I0108 19:24:40.915170   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:24:40.915316   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:24:40.915442   90261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:24:40.915577   90261 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:24:40.915867   90261 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:25:20.901459   90261 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:25:20.902173   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:25:20.902329   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:25:25.903233   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:25:25.903509   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:25:35.904481   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:25:35.904721   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:25:55.905403   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:25:55.905621   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:26:35.905184   90261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:26:35.905403   90261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:26:35.905416   90261 kubeadm.go:322] 
	I0108 19:26:35.905454   90261 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:26:35.905486   90261 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:26:35.905491   90261 kubeadm.go:322] 
	I0108 19:26:35.905524   90261 kubeadm.go:322] This error is likely caused by:
	I0108 19:26:35.905550   90261 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:26:35.905640   90261 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:26:35.905651   90261 kubeadm.go:322] 
	I0108 19:26:35.905737   90261 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:26:35.905769   90261 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:26:35.905804   90261 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:26:35.905810   90261 kubeadm.go:322] 
	I0108 19:26:35.905894   90261 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:26:35.905971   90261 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:26:35.906036   90261 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:26:35.906080   90261 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:26:35.906175   90261 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:26:35.906218   90261 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:26:35.907437   90261 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:26:35.907496   90261 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:26:35.907623   90261 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:26:35.907698   90261 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:26:35.907759   90261 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:26:35.907843   90261 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0108 19:26:35.907868   90261 kubeadm.go:406] StartCluster complete in 3m54.185327438s
	I0108 19:26:35.907959   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:26:35.927262   90261 logs.go:284] 0 containers: []
	W0108 19:26:35.927276   90261 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:26:35.927358   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:26:35.946472   90261 logs.go:284] 0 containers: []
	W0108 19:26:35.946489   90261 logs.go:286] No container was found matching "etcd"
	I0108 19:26:35.946562   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:26:35.965846   90261 logs.go:284] 0 containers: []
	W0108 19:26:35.965860   90261 logs.go:286] No container was found matching "coredns"
	I0108 19:26:35.965928   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:26:35.982876   90261 logs.go:284] 0 containers: []
	W0108 19:26:35.982891   90261 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:26:35.982959   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:26:36.000792   90261 logs.go:284] 0 containers: []
	W0108 19:26:36.000806   90261 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:26:36.000874   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:26:36.018260   90261 logs.go:284] 0 containers: []
	W0108 19:26:36.018276   90261 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:26:36.018352   90261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:26:36.035907   90261 logs.go:284] 0 containers: []
	W0108 19:26:36.035921   90261 logs.go:286] No container was found matching "kindnet"
	I0108 19:26:36.035928   90261 logs.go:123] Gathering logs for kubelet ...
	I0108 19:26:36.035935   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:26:36.069951   90261 logs.go:123] Gathering logs for dmesg ...
	I0108 19:26:36.069966   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:26:36.082201   90261 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:26:36.082217   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:26:36.133173   90261 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:26:36.133190   90261 logs.go:123] Gathering logs for Docker ...
	I0108 19:26:36.133206   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:26:36.148420   90261 logs.go:123] Gathering logs for container status ...
	I0108 19:26:36.148435   90261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 19:26:36.196097   90261 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 19:26:36.196122   90261 out.go:239] * 
	W0108 19:26:36.196176   90261 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:26:36.196205   90261 out.go:239] * 
	W0108 19:26:36.196810   90261 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 19:26:36.259204   90261 out.go:177] 
	W0108 19:26:36.301452   90261 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:26:36.301515   90261 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 19:26:36.301541   90261 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 19:26:36.343480   90261 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
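The failure chain in the log above is internally consistent: the kubelet never answered http://localhost:10248/healthz, kubeadm timed out waiting for the control plane, and the follow-up scan found no kube-apiserver, etcd, scheduler, or controller-manager containers. The IsDockerSystemdCheck warning (cgroupfs detected, systemd recommended) is the most likely culprit the log itself flags. A minimal triage sketch, assuming the profile name from this run and that Docker inside the kicbase node is the runtime; the docker ps pipeline is quoted verbatim from the kubeadm hint, while the docker info format field and the grep over the two files kubeadm wrote (/var/lib/kubelet/config.yaml and kubeadm-flags.env) are assumptions about where the driver is recorded:

	# Compare the cgroup driver Docker reports against what the kubelet was configured with
	out/minikube-darwin-amd64 ssh -p old-k8s-version-901000 -- docker info --format '{{.CgroupDriver}}'
	out/minikube-darwin-amd64 ssh -p old-k8s-version-901000 -- sudo grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
	# List any Kubernetes containers that did start, per the kubeadm troubleshooting hint
	out/minikube-darwin-amd64 ssh -p old-k8s-version-901000 -- "docker ps -a | grep kube | grep -v pause"

A cgroupfs/systemd mismatch would explain a kubelet that exits immediately, which matches the empty container listings gathered above.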
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:22:27.950231376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65c44b0e10ad32eff889c1f6bb439c21ce1470f12efaf7b43221a9d5ad40761a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49930"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49931"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49928"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65c44b0e10ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "c677f4f15eee1cfda0bebd0c72a3f693127afc92899807728b1626f9989feef9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
E0108 19:26:36.502191   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 6 (385.312283ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 19:26:36.872500   91042 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-901000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (253.56s)
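The Suggestion line in the stderr capture already names the usual remedy for this combination. A retry sketch under that assumption, reusing a subset of the flags from the failed invocation plus the suggested kubelet override; whether it clears this particular run is unverified:

	# Inspect the kubelet unit inside the node first, as the log suggests
	out/minikube-darwin-amd64 ssh -p old-k8s-version-901000 -- sudo journalctl -xeu kubelet
	# Retry with the kubelet pinned to the systemd cgroup driver
	out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd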

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-901000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-901000 create -f testdata/busybox.yaml: exit status 1 (36.457009ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-901000 create -f testdata/busybox.yaml failed: exit status 1
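The "error: no openapi getter" here reads as a downstream symptom rather than a new failure: it is consistent with kubectl having no usable cluster endpoint for this context (see the stale-kubeconfig warning and the "does not appear in .../kubeconfig" status error above), so it cannot build an OpenAPI client to validate the manifest. A quick sketch to check whether the context can reach any API server at all, with the context name taken from this run; both commands are expected to fail while the control plane is down:

	# Should report an unreachable or missing cluster endpoint for this context
	kubectl --context old-k8s-version-901000 cluster-info
	kubectl --context old-k8s-version-901000 get --raw /healthz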
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
E0108 19:26:36.932790   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:36.938171   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:36.949084   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:36.969164   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:22:27.950231376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65c44b0e10ad32eff889c1f6bb439c21ce1470f12efaf7b43221a9d5ad40761a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49930"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49931"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49928"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65c44b0e10ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "c677f4f15eee1cfda0bebd0c72a3f693127afc92899807728b1626f9989feef9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
E0108 19:26:37.009585   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:37.090406   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:37.250822   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:37.263440   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 6 (381.635815ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 19:26:37.342925   91055 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-901000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
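Note on the exit status 6 above: per the status.go:415 error, the profile's entry is absent from /Users/jenkins/minikube-integration/17866-74927/kubeconfig even though the container itself reports Running, so minikube cannot extract the apiserver endpoint. A plausible manual repair sequence, assuming the cluster is otherwise healthy (illustrative only, not part of the recorded run):

    kubectl config get-contexts                                          # the old-k8s-version-901000 context should be missing
    out/minikube-darwin-amd64 -p old-k8s-version-901000 update-context   # rewrite the kubeconfig entry, as the warning suggests
    out/minikube-darwin-amd64 status -p old-k8s-version-901000           # re-check once the endpoint resolves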
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:22:27.950231376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65c44b0e10ad32eff889c1f6bb439c21ce1470f12efaf7b43221a9d5ad40761a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49930"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49931"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49928"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65c44b0e10ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "c677f4f15eee1cfda0bebd0c72a3f693127afc92899807728b1626f9989feef9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
E0108 19:26:37.571140   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 6 (382.868837ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 19:26:37.780303   91067 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-901000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-901000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 19:26:38.211403   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:38.630946   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:26:39.492866   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-901000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.370546101s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-901000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
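Every kubectl apply in the stderr block above failed with connection refused on 127.0.0.1:8443, which points at the apiserver not listening inside the node when the addon callback ran, rather than at the metrics-server manifests themselves. A sketch of how one might confirm that from the host, assuming curl is present in the kicbase image (illustrative commands, not from the recorded run):

    docker exec old-k8s-version-901000 curl -sk https://localhost:8443/healthz   # no response expected while the apiserver is down
    docker exec old-k8s-version-901000 docker ps --filter name=kube-apiserver    # with the docker runtime, the apiserver runs as a container in the node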
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-901000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-901000 describe deploy/metrics-server -n kube-system: exit status 1 (36.666878ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-901000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-901000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
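The assertion at start_stop_delete_test.go:221 saw empty deployment info because the describe call itself failed on the missing context. On a healthy cluster the image/registry override could be verified directly; a minimal sketch (illustrative, since the context did not exist in this run):

    kubectl --context old-k8s-version-901000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to print: fake.domain/registry.k8s.io/echoserver:1.4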
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:22:27.950231376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65c44b0e10ad32eff889c1f6bb439c21ce1470f12efaf7b43221a9d5ad40761a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49929"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49930"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49931"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49928"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65c44b0e10ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "c677f4f15eee1cfda0bebd0c72a3f693127afc92899807728b1626f9989feef9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 6 (416.04719ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 19:28:12.653538   91180 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-901000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (505.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0108 19:28:16.986449   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:16.991679   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.003851   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.026093   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.066308   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.147360   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.308250   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:17.628551   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:18.268906   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:19.549110   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:22.109178   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:27.229793   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:30.676716   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:28:37.470006   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:28:52.594465   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:28:57.951122   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:29:01.303202   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:29:20.339209   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:29:20.772399   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:29:25.353572   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:29:28.982389   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:29:38.910929   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:30:07.187081   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:30:15.305620   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:30:16.698799   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:30:24.131950   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:30:43.019210   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:30:44.386602   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:30:50.519735   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 19:31:00.829349   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:31:36.924514   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:31:41.498088   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:32:02.924562   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:32:04.609152   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:32:07.620333   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:32:09.191847   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m23.400773406s)

                                                
                                                
-- stdout --
	* [old-k8s-version-901000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-901000 in cluster old-k8s-version-901000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Restarting existing docker container for "old-k8s-version-901000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
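The repeated "Generating certificates and keys ... / Booting up control plane ..." pair in the stdout above suggests the first kubeadm bring-up failed and minikube retried before giving up with exit status 109. Illustrative follow-up commands for capturing the underlying kubeadm/kubelet error (not part of the recorded run; journal access assumes the systemd init visible in the docker inspect output above):

    out/minikube-darwin-amd64 logs -p old-k8s-version-901000 --file=logs.txt
    docker exec old-k8s-version-901000 journalctl -u kubelet --no-pager | tail -n 50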
** stderr ** 
	I0108 19:28:14.782096   91210 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:28:14.782409   91210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:28:14.782415   91210 out.go:309] Setting ErrFile to fd 2...
	I0108 19:28:14.782419   91210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:28:14.782616   91210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:28:14.784011   91210 out.go:303] Setting JSON to false
	I0108 19:28:14.806430   91210 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":37666,"bootTime":1704733228,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:28:14.806520   91210 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:28:14.828449   91210 out.go:177] * [old-k8s-version-901000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:28:14.870971   91210 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:28:14.871058   91210 notify.go:220] Checking for updates...
	I0108 19:28:14.913852   91210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:28:14.934963   91210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:28:14.955908   91210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:28:14.977106   91210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:28:14.997888   91210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:28:15.019779   91210 config.go:182] Loaded profile config "old-k8s-version-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 19:28:15.041970   91210 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 19:28:15.062980   91210 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:28:15.119494   91210 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:28:15.119651   91210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:28:15.220699   91210 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:28:15.210697169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:28:15.242266   91210 out.go:177] * Using the docker driver based on existing profile
	I0108 19:28:15.262784   91210 start.go:298] selected driver: docker
	I0108 19:28:15.262807   91210 start.go:902] validating driver "docker" against &{Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:28:15.262907   91210 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:28:15.267055   91210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:28:15.369209   91210 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:28:15.359481428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:28:15.369454   91210 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 19:28:15.369524   91210 cni.go:84] Creating CNI manager for ""
	I0108 19:28:15.369536   91210 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:28:15.369547   91210 start_flags.go:321] config:
	{Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:28:15.411805   91210 out.go:177] * Starting control plane node old-k8s-version-901000 in cluster old-k8s-version-901000
	I0108 19:28:15.432787   91210 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:28:15.454055   91210 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:28:15.496080   91210 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:28:15.496136   91210 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:28:15.496155   91210 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 19:28:15.496178   91210 cache.go:56] Caching tarball of preloaded images
	I0108 19:28:15.496406   91210 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:28:15.496425   91210 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 19:28:15.496602   91210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/config.json ...
	I0108 19:28:15.548719   91210 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:28:15.548736   91210 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:28:15.548758   91210 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:28:15.548800   91210 start.go:365] acquiring machines lock for old-k8s-version-901000: {Name:mk41257d6f9820536f749153d111ded94c6d377e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:28:15.548890   91210 start.go:369] acquired machines lock for "old-k8s-version-901000" in 68.975µs
	I0108 19:28:15.548912   91210 start.go:96] Skipping create...Using existing machine configuration
	I0108 19:28:15.548921   91210 fix.go:54] fixHost starting: 
	I0108 19:28:15.549162   91210 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Status}}
	I0108 19:28:15.600383   91210 fix.go:102] recreateIfNeeded on old-k8s-version-901000: state=Stopped err=<nil>
	W0108 19:28:15.600430   91210 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 19:28:15.621923   91210 out.go:177] * Restarting existing docker container for "old-k8s-version-901000" ...
	I0108 19:28:15.663591   91210 cli_runner.go:164] Run: docker start old-k8s-version-901000
	I0108 19:28:15.908845   91210 cli_runner.go:164] Run: docker container inspect old-k8s-version-901000 --format={{.State.Status}}
	I0108 19:28:15.964705   91210 kic.go:430] container "old-k8s-version-901000" state is running.
	I0108 19:28:15.965353   91210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:28:16.019656   91210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/config.json ...
	I0108 19:28:16.020095   91210 machine.go:88] provisioning docker machine ...
	I0108 19:28:16.020123   91210 ubuntu.go:169] provisioning hostname "old-k8s-version-901000"
	I0108 19:28:16.020193   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:16.083174   91210 main.go:141] libmachine: Using SSH client type: native
	I0108 19:28:16.083522   91210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50187 <nil> <nil>}
	I0108 19:28:16.083534   91210 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901000 && echo "old-k8s-version-901000" | sudo tee /etc/hostname
	I0108 19:28:16.084711   91210 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 19:28:19.229280   91210 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901000
	
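Two commands do the hostname work above: sudo hostname changes it for the running system, and writing /etc/hostname makes the change persist across container restarts; sudo tee is used because a plain > redirect would run in the unprivileged shell. A minimal sketch of the same idiom, using the name from this run:

	NAME=old-k8s-version-901000            # value taken from the log above
	sudo hostname "$NAME"                  # effective immediately
	echo "$NAME" | sudo tee /etc/hostname  # survives a restart of the container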
	I0108 19:28:19.229375   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:19.281973   91210 main.go:141] libmachine: Using SSH client type: native
	I0108 19:28:19.282267   91210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50187 <nil> <nil>}
	I0108 19:28:19.282280   91210 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:28:19.413942   91210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 19:28:19.413974   91210 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:28:19.413995   91210 ubuntu.go:177] setting up certificates
	I0108 19:28:19.414007   91210 provision.go:83] configureAuth start
	I0108 19:28:19.414102   91210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:28:19.466154   91210 provision.go:138] copyHostCerts
	I0108 19:28:19.466245   91210 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:28:19.466254   91210 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:28:19.466385   91210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:28:19.466626   91210 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:28:19.466633   91210 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:28:19.466702   91210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:28:19.466888   91210 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:28:19.466895   91210 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:28:19.466964   91210 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:28:19.467110   91210 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901000 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-901000]
	I0108 19:28:19.561773   91210 provision.go:172] copyRemoteCerts
	I0108 19:28:19.561846   91210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:28:19.561901   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:19.612982   91210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50187 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:28:19.707732   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:28:19.728533   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 19:28:19.749580   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 19:28:19.770156   91210 provision.go:86] duration metric: configureAuth took 356.14187ms
	I0108 19:28:19.770169   91210 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:28:19.770307   91210 config.go:182] Loaded profile config "old-k8s-version-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 19:28:19.770369   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:19.821792   91210 main.go:141] libmachine: Using SSH client type: native
	I0108 19:28:19.822114   91210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50187 <nil> <nil>}
	I0108 19:28:19.822125   91210 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:28:19.956765   91210 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:28:19.956780   91210 ubuntu.go:71] root file system type: overlay
	I0108 19:28:19.956867   91210 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:28:19.956948   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.008545   91210 main.go:141] libmachine: Using SSH client type: native
	I0108 19:28:20.008841   91210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50187 <nil> <nil>}
	I0108 19:28:20.008894   91210 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:28:20.152242   91210 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:28:20.152356   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.204411   91210 main.go:141] libmachine: Using SSH client type: native
	I0108 19:28:20.204744   91210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50187 <nil> <nil>}
	I0108 19:28:20.204757   91210 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:28:20.343074   91210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
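The unit file above is staged as docker.service.new, and the diff ... || { mv ...; } one-liner swaps it in and restarts Docker only when the rendered content actually differs, so an unchanged configuration costs no daemon restart. The same compare-then-swap idiom, unrolled:

	UNIT=/lib/systemd/system/docker.service
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then   # non-zero exit: content changed
		sudo mv "$UNIT.new" "$UNIT"
		sudo systemctl -f daemon-reload
		sudo systemctl -f enable docker
		sudo systemctl -f restart docker
	fi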
	I0108 19:28:20.343095   91210 machine.go:91] provisioned docker machine in 4.323086568s
	I0108 19:28:20.343102   91210 start.go:300] post-start starting for "old-k8s-version-901000" (driver="docker")
	I0108 19:28:20.343110   91210 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:28:20.343179   91210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:28:20.343242   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.394934   91210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50187 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:28:20.488751   91210 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:28:20.492920   91210 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:28:20.492939   91210 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:28:20.492946   91210 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:28:20.492952   91210 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:28:20.492969   91210 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:28:20.493051   91210 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:28:20.493197   91210 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:28:20.493355   91210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:28:20.501569   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:28:20.521824   91210 start.go:303] post-start completed in 178.713991ms
	I0108 19:28:20.521922   91210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:28:20.521978   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.573395   91210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50187 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:28:20.664372   91210 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:28:20.669170   91210 fix.go:56] fixHost completed within 5.120359829s
	I0108 19:28:20.669188   91210 start.go:83] releasing machines lock for "old-k8s-version-901000", held for 5.12040243s
	I0108 19:28:20.669280   91210 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-901000
	I0108 19:28:20.720498   91210 ssh_runner.go:195] Run: cat /version.json
	I0108 19:28:20.720501   91210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:28:20.720589   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.720610   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:20.774512   91210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50187 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:28:20.774529   91210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50187 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/old-k8s-version-901000/id_rsa Username:docker}
	I0108 19:28:20.976320   91210 ssh_runner.go:195] Run: systemctl --version
	I0108 19:28:20.981300   91210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 19:28:20.986088   91210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 19:28:20.986147   91210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0108 19:28:20.994596   91210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0108 19:28:21.002753   91210 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0108 19:28:21.002782   91210 start.go:475] detecting cgroup driver to use...
	I0108 19:28:21.002797   91210 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:28:21.002931   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:28:21.017470   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0108 19:28:21.027049   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:28:21.036286   91210 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:28:21.036348   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:28:21.045557   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:28:21.054987   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:28:21.064455   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:28:21.074077   91210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:28:21.083125   91210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
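The host was detected as using the cgroupfs driver, so the run of sed edits above forces containerd to agree: SystemdCgroup is pinned to false, the legacy io.containerd.runtime.v1.linux and runc.v1 names are rewritten to io.containerd.runc.v2, and the CNI conf_dir is fixed. Each edit is a whole-line substitution, so repeating it is harmless; a quick spot check afterwards:

	# confirm containerd now matches the detected cgroupfs driver
	grep -E 'SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected: SystemdCgroup = false ... conf_dir = "/etc/cni/net.d"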
	I0108 19:28:21.092478   91210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:28:21.100439   91210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:28:21.108897   91210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:28:21.164863   91210 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 19:28:21.248514   91210 start.go:475] detecting cgroup driver to use...
	I0108 19:28:21.248534   91210 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:28:21.248612   91210 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:28:21.264286   91210 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:28:21.264354   91210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:28:21.275811   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:28:21.292457   91210 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:28:21.296973   91210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:28:21.307116   91210 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:28:21.324763   91210 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:28:21.413306   91210 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:28:21.470947   91210 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:28:21.471041   91210 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:28:21.507999   91210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:28:21.566296   91210 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:28:21.824462   91210 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:28:21.847691   91210 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:28:21.916569   91210 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0108 19:28:21.916673   91210 cli_runner.go:164] Run: docker exec -t old-k8s-version-901000 dig +short host.docker.internal
	I0108 19:28:22.030891   91210 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:28:22.030982   91210 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:28:22.035463   91210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
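The /etc/hosts rewrite above uses a sudo-safe pattern: the filtered file plus the fresh entry is assembled in a temp file by the unprivileged shell, then installed with sudo cp, because a > redirect inside the quoted command would not run as root. The same pattern with the values from this run:

	IP=192.168.65.254; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts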
	I0108 19:28:22.045968   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:22.098085   91210 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 19:28:22.098170   91210 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:28:22.117684   91210 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:28:22.117695   91210 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:28:22.117763   91210 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:28:22.126387   91210 ssh_runner.go:195] Run: which lz4
	I0108 19:28:22.130482   91210 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 19:28:22.134291   91210 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 19:28:22.134318   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0108 19:28:27.213899   91210 docker.go:635] Took 5.083567 seconds to copy over tarball
	I0108 19:28:27.213972   91210 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 19:28:28.736871   91210 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.5229151s)
	I0108 19:28:28.736886   91210 ssh_runner.go:146] rm: /preloaded.tar.lz4
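The preload path sidesteps pulling each image individually: a ~370 MB lz4 tarball of prebuilt /var/lib/docker content is copied into the node, unpacked in place, and deleted. GNU tar's -I flag names the decompressor and --xattrs preserves the file capabilities some binaries rely on:

	# unpack a docker/overlay2 image preload under /var, then drop the tarball
	sudo tar --xattrs --xattrs-include security.capability \
		-I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4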
	I0108 19:28:28.774982   91210 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 19:28:28.783399   91210 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0108 19:28:28.799186   91210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:28:28.853934   91210 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:28:29.232345   91210 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:28:29.251582   91210 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 19:28:29.251597   91210 docker.go:677] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0108 19:28:29.251607   91210 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 19:28:29.256806   91210 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:28:29.256982   91210 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:28:29.256977   91210 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:28:29.257524   91210 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 19:28:29.257931   91210 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:28:29.258099   91210 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 19:28:29.258693   91210 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:28:29.258783   91210 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:28:29.262660   91210 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:28:29.264613   91210 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 19:28:29.264702   91210 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:28:29.264896   91210 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:28:29.266069   91210 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:28:29.266080   91210 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:28:29.266145   91210 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:28:29.266260   91210 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 19:28:29.703354   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:28:29.723471   91210 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 19:28:29.723518   91210 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:28:29.723575   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 19:28:29.744265   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 19:28:29.750864   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 19:28:29.762890   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 19:28:29.770960   91210 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 19:28:29.771003   91210 docker.go:323] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 19:28:29.771095   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0108 19:28:29.785000   91210 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 19:28:29.791350   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 19:28:29.815802   91210 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 19:28:29.796766   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:28:29.815872   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0108 19:28:29.821826   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:28:29.838473   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 19:28:29.838539   91210 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 19:28:29.838564   91210 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:28:29.838632   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 19:28:29.844538   91210 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 19:28:29.844574   91210 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:28:29.844663   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 19:28:29.858235   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 19:28:29.864347   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 19:28:29.874026   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:28:29.893906   91210 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 19:28:29.893935   91210 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:28:29.893998   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 19:28:29.911673   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 19:28:29.931469   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 19:28:29.948976   91210 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 19:28:29.949002   91210 docker.go:323] Removing image: registry.k8s.io/pause:3.1
	I0108 19:28:29.949062   91210 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0108 19:28:29.966342   91210 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 19:28:30.050391   91210 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:28:30.071475   91210 cache_images.go:92] LoadImages completed in 819.871689ms
	W0108 19:28:30.071525   91210 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
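The preloaded tarball carries k8s.gcr.io-tagged images while this code path expects registry.k8s.io names, so every image is flagged as "needs transfer"; the per-image cache files under .minikube/cache/images are missing on the host, so LoadImages gives up with the warning above and the start continues, resolving the images later. Purely as an illustration that the mismatch is in the tag rather than the image content (this is not what minikube runs here):

	docker tag k8s.gcr.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-apiserver:v1.16.0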
	I0108 19:28:30.071587   91210 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:28:30.119966   91210 cni.go:84] Creating CNI manager for ""
	I0108 19:28:30.119982   91210 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 19:28:30.120002   91210 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:28:30.120022   91210 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901000 NodeName:old-k8s-version-901000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 19:28:30.120142   91210 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-901000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-901000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
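The generated config above is four YAML documents separated by ---: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration for the node components. It is staged a few lines below as /var/tmp/minikube/kubeadm.yaml.new, the .new suffix suggesting the same compare-then-swap staging used for docker.service earlier. A minimal sketch of how such a file is consumed, assuming the standard kubeadm entry point (the exact invocation is not shown in this section):

	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml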
	I0108 19:28:30.120200   91210 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-901000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 19:28:30.120243   91210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 19:28:30.128879   91210 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:28:30.128951   91210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:28:30.137526   91210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0108 19:28:30.153626   91210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:28:30.169667   91210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0108 19:28:30.185926   91210 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:28:30.190190   91210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:28:30.200828   91210 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000 for IP: 192.168.85.2
	I0108 19:28:30.200848   91210 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:28:30.201033   91210 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:28:30.201116   91210 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:28:30.201225   91210 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/client.key
	I0108 19:28:30.201307   91210 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key.43b9df8c
	I0108 19:28:30.201376   91210 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key
	I0108 19:28:30.201596   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:28:30.201675   91210 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:28:30.201687   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:28:30.201736   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:28:30.201787   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:28:30.201830   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:28:30.201918   91210 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:28:30.202462   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:28:30.224215   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:28:30.245674   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:28:30.266628   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/old-k8s-version-901000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:28:30.289106   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:28:30.310012   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:28:30.332192   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:28:30.353161   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:28:30.375665   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:28:30.398647   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:28:30.421315   91210 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:28:30.445173   91210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:28:30.460793   91210 ssh_runner.go:195] Run: openssl version
	I0108 19:28:30.467180   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:28:30.477981   91210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:28:30.482982   91210 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:28:30.483049   91210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:28:30.491036   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:28:30.501395   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:28:30.511251   91210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:28:30.516714   91210 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:28:30.516789   91210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:28:30.525049   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:28:30.536341   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:28:30.546853   91210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:28:30.551335   91210 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:28:30.551387   91210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:28:30.558756   91210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
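	[Editor's note: the hash-and-symlink steps above implement the standard OpenSSL CA directory layout: each trusted certificate is linked under /etc/ssl/certs as <subject_hash>.0 so that OpenSSL can find it by hash at verification time. A minimal sketch of the same procedure, using the minikubeCA paths from the log:

	    # Expose the CA under /usr/share/ca-certificates via /etc/ssl/certs
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    # Compute the subject hash OpenSSL uses for lookup (b5213941 in this run)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Link the certificate under its hash; the ".0" suffix disambiguates hash collisions
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	]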
	I0108 19:28:30.569544   91210 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:28:30.575229   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 19:28:30.584470   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 19:28:30.591723   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 19:28:30.598340   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 19:28:30.604699   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 19:28:30.611161   91210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
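	[Editor's note: the -checkend 86400 invocations above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); openssl exits 0 if the certificate remains valid past that window and 1 otherwise. A sketch of the same check with an explicit result:

	    # Exit status 0: still valid in 24h; 1: expires sooner and needs regeneration
	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	        echo "certificate valid for at least another day"
	    else
	        echo "certificate expires within 24h - regenerate before restarting the control plane"
	    fi
	]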
	I0108 19:28:30.617996   91210 kubeadm.go:404] StartCluster: {Name:old-k8s-version-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:28:30.618103   91210 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:28:30.636173   91210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:28:30.645206   91210 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 19:28:30.645229   91210 kubeadm.go:636] restartCluster start
	I0108 19:28:30.645291   91210 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 19:28:30.653867   91210 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:30.653969   91210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-901000
	I0108 19:28:30.706016   91210 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-901000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:28:30.706176   91210 kubeconfig.go:146] "old-k8s-version-901000" context is missing from /Users/jenkins/minikube-integration/17866-74927/kubeconfig - will repair!
	I0108 19:28:30.706489   91210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
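	[Editor's note: the WriteFile above is serialized through minikube's own file lock (lock.go), retrying every 500ms for up to 1m. This is not minikube's implementation, but the same guarded-write pattern can be approximated in shell with flock; the lock path below is illustrative:

	    # Take an exclusive lock before rewriting the kubeconfig; give up after 60s
	    (
	        flock -w 60 9 || { echo "timed out waiting for kubeconfig lock" >&2; exit 1; }
	        cp kubeconfig.new "$HOME/.kube/config"
	    ) 9>/tmp/kubeconfig.lock
	]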
	I0108 19:28:30.707866   91210 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 19:28:30.716611   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:30.716685   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:30.725966   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:31.216853   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:31.217060   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:31.228865   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:31.716923   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:31.717099   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:31.728482   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:32.217093   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:32.217274   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:32.228642   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:32.717281   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:32.717438   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:32.728617   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:33.217012   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:33.217151   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:33.228474   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:33.717007   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:33.717121   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:33.728865   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:34.217793   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:34.217964   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:34.229293   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:34.717232   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:34.717384   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:34.728783   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:35.218391   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:35.218525   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:35.230110   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:35.716573   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:35.716679   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:35.728017   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:36.217637   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:36.217780   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:36.229239   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:36.717734   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:36.717803   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:36.727686   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:37.217634   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:37.217832   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:37.229473   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:37.717151   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:37.717316   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:37.728496   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:38.217404   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:38.217535   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:38.229243   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:38.717137   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:38.717284   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:38.728620   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:39.217326   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:39.217513   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:39.229167   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:39.716799   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:39.716900   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:39.728503   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:40.216744   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:40.216845   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:40.228358   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:40.716972   91210 api_server.go:166] Checking apiserver status ...
	I0108 19:28:40.717050   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:28:40.727220   91210 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:28:40.727237   91210 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
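	[Editor's note: each "Checking apiserver status" iteration above re-runs pgrep roughly every 500ms until the surrounding context deadline is reached (about 10s here, judging from the 19:28:30.7 to 19:28:40.7 timestamps), which is what produces the "context deadline exceeded" verdict. The retry loop reduces to a pattern like this; the 10s budget is inferred from the log, not taken from minikube's source:

	    # Poll for the apiserver process every 500ms, up to ~10s
	    deadline=$((SECONDS + 10))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	        if [ "$SECONDS" -ge "$deadline" ]; then
	            echo "apiserver did not appear before the deadline" >&2
	            break
	        fi
	        sleep 0.5
	    done
	]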
	I0108 19:28:40.727252   91210 kubeadm.go:1135] stopping kube-system containers ...
	I0108 19:28:40.727337   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:28:40.744707   91210 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 19:28:40.756291   91210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:28:40.764907   91210 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Jan  9 03:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  9 03:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan  9 03:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Jan  9 03:24 /etc/kubernetes/scheduler.conf
	
	I0108 19:28:40.764967   91210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 19:28:40.773460   91210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 19:28:40.781937   91210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 19:28:40.790198   91210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 19:28:40.798678   91210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:28:40.807211   91210 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 19:28:40.807223   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:28:40.858248   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:28:41.672331   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:28:41.854935   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:28:41.920902   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
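	[Editor's note: the five commands above replay individual kubeadm init phases rather than a full kubeadm init, letting minikube regenerate certificates, kubeconfigs, the kubelet bootstrap, the control-plane static-pod manifests, and local etcd without wiping existing cluster state. In outline, the sequence from the log is:

	    # Re-run only the init phases needed to reconfigure an existing node
	    # ($phase is deliberately unquoted so "certs all" splits into two arguments)
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
	]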
	I0108 19:28:41.974674   91210 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:28:41.974747   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:42.475767   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:42.975503   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:43.474934   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:43.975279   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:44.474822   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:44.974836   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:45.474816   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:45.975277   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:46.474808   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:46.974882   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:47.475165   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:47.974789   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:48.476811   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:48.974721   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:49.474756   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:49.974893   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:50.475122   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:50.974702   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:51.475566   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:51.975137   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:52.475453   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:52.974869   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:53.475119   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:53.974623   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:54.475875   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:54.974625   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:55.475353   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:55.975011   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:56.475286   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:56.975222   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:57.474781   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:57.975942   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:58.474519   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:58.975503   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:59.474666   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:28:59.974490   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:00.475580   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:00.975126   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:01.474383   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:01.974722   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:02.474439   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:02.974374   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:03.475859   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:03.974493   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:04.474481   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:04.974717   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:05.474612   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:05.975030   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:06.475324   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:06.974296   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:07.474563   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:07.974433   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:08.474677   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:08.974267   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:09.474357   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:09.974254   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:10.474440   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:10.975019   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:11.474269   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:11.974406   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:12.474478   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:12.974381   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:13.474500   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:13.974308   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:14.476110   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:14.974619   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:15.475967   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:15.974775   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:16.474484   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:16.974206   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:17.474817   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:17.974882   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:18.474774   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:18.974104   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:19.474324   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:19.974842   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:20.474779   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:20.975154   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:21.474486   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:21.974235   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:22.474403   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:22.974049   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:23.474073   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:23.974806   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:24.474118   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:24.974165   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:25.473978   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:25.975911   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:26.474123   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:26.973856   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:27.474010   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:27.975973   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:28.474166   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:28.973908   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:29.474096   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:29.974301   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:30.473872   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:30.974293   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:31.475904   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:31.973755   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:32.474258   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:32.974234   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:33.474316   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:33.974252   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:34.474134   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:34.973766   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:35.474342   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:35.974192   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:36.473664   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:36.973659   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:37.475733   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:37.975389   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:38.473734   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:38.973813   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:39.474551   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:39.974200   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:40.474013   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:40.974081   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:41.473970   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:41.974482   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:41.994546   91210 logs.go:284] 0 containers: []
	W0108 19:29:41.994560   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:41.994633   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:42.013473   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.013487   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:42.013565   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:42.031957   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.031972   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:42.032046   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:42.050907   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.050920   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:42.051002   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:42.069407   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.069422   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:42.069503   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:42.087342   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.087356   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:42.087427   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:42.105741   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.105755   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:42.105825   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:42.124406   91210 logs.go:284] 0 containers: []
	W0108 19:29:42.124421   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:42.124435   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:42.124444   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:42.158629   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:42.158647   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:42.171979   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:42.171993   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:42.225774   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:42.225788   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:42.225796   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:42.240220   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:42.240237   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
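	[Editor's note: each diagnostic pass above enumerates the expected control-plane containers by the k8s_<component> name prefix that Docker's kubelet integration assigns, then falls back to node-level logs when none are found. A condensed sketch of the same sweep, assuming the same component names as the log:

	    # Look for each expected component among all containers (running or exited)
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        [ -z "$ids" ] && echo "no container found matching ${c}" >&2
	    done
	    # Fall back to node-level logs when the control plane is absent
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400

	The same cycle repeats below every few seconds until the apiserver appears or the overall restart deadline expires.]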
	I0108 19:29:44.796555   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:44.813405   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:44.831348   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.831362   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:44.831440   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:44.849502   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.849517   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:44.849595   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:44.867612   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.867627   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:44.867714   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:44.887099   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.887114   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:44.887187   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:44.906184   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.906200   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:44.906273   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:44.926385   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.926399   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:44.926477   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:44.945257   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.945271   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:44.945338   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:44.964480   91210 logs.go:284] 0 containers: []
	W0108 19:29:44.964494   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:44.964501   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:44.964508   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:45.001125   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:45.001142   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:45.013831   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:45.013846   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:45.069004   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:45.069026   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:45.069040   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:45.083483   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:45.083498   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:47.633280   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:47.645044   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:47.663743   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.663757   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:47.663824   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:47.681840   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.681855   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:47.681947   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:47.699769   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.699782   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:47.699850   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:47.718495   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.718509   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:47.718583   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:47.735756   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.735769   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:47.735840   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:47.753495   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.753510   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:47.753586   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:47.771151   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.771164   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:47.771241   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:47.788185   91210 logs.go:284] 0 containers: []
	W0108 19:29:47.788199   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:47.788206   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:47.788218   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:47.822763   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:47.822780   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:47.835420   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:47.835434   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:47.885482   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:47.885493   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:47.885501   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:47.900914   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:47.900933   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:50.453645   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:50.464919   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:50.481801   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.481815   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:50.481892   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:50.500558   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.500573   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:50.500655   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:50.517803   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.517817   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:50.517912   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:50.537289   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.537302   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:50.537378   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:50.557489   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.557504   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:50.557579   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:50.575122   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.575135   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:50.575203   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:50.592494   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.592509   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:50.592580   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:50.610063   91210 logs.go:284] 0 containers: []
	W0108 19:29:50.610078   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:50.610085   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:50.610092   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:50.664325   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:50.664338   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:50.664347   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:50.678408   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:50.678422   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:50.728634   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:50.728652   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:50.764070   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:50.764086   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:53.276910   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:53.288352   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:53.305369   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.305383   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:53.305450   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:53.323140   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.323152   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:53.323220   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:53.341295   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.341309   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:53.341378   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:53.361642   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.361656   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:53.361734   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:53.389174   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.389188   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:53.389273   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:53.408215   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.408230   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:53.408315   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:53.426943   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.426958   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:53.427029   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:53.450266   91210 logs.go:284] 0 containers: []
	W0108 19:29:53.450283   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:53.450293   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:53.450301   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:53.521118   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:53.521135   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:53.556857   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:53.556875   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:53.569826   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:53.569847   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:53.618429   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:53.618445   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:53.618462   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:56.133483   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:56.145115   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:56.162106   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.162120   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:56.162200   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:56.179582   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.179596   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:56.179664   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:56.198408   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.198422   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:56.198501   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:56.216985   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.216999   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:56.217066   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:56.234494   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.234507   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:56.234574   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:56.251860   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.251872   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:56.251943   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:56.269519   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.269531   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:56.269598   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:56.287192   91210 logs.go:284] 0 containers: []
	W0108 19:29:56.287206   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:56.287217   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:56.287225   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:29:56.301804   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:56.301819   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:56.353402   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:56.353418   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:56.388736   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:56.388752   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:56.401210   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:56.401225   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:56.454060   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:58.954235   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:29:58.964343   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:29:58.983127   91210 logs.go:284] 0 containers: []
	W0108 19:29:58.983142   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:29:58.983225   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:29:59.001030   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.001044   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:29:59.001110   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:29:59.018692   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.018715   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:29:59.018826   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:29:59.039069   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.039084   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:29:59.039162   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:29:59.056839   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.056852   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:29:59.056918   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:29:59.074595   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.074609   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:29:59.074688   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:29:59.091944   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.091957   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:29:59.092026   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:29:59.110355   91210 logs.go:284] 0 containers: []
	W0108 19:29:59.110369   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:29:59.110378   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:29:59.110386   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:29:59.163742   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:29:59.163757   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:29:59.198530   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:29:59.198544   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:29:59.210964   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:29:59.210979   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:29:59.260391   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:29:59.260410   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:29:59.260418   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:01.776241   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:01.788116   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:01.807041   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.807058   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:01.807127   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:01.826022   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.826036   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:01.826104   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:01.844786   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.844804   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:01.844859   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:01.863296   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.863310   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:01.863393   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:01.881810   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.881823   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:01.881903   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:01.901946   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.901960   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:01.902038   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:01.923667   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.923701   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:01.923771   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:01.944762   91210 logs.go:284] 0 containers: []
	W0108 19:30:01.944775   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:01.944785   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:01.944793   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:01.979562   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:01.979583   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:01.992766   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:01.992783   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:02.046481   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:02.046494   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:02.046503   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:02.060873   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:02.060909   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:04.615688   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:04.627390   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:04.644846   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.644860   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:04.644927   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:04.663742   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.663757   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:04.663833   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:04.683045   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.683060   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:04.683134   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:04.701866   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.701884   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:04.701953   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:04.720391   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.720406   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:04.720479   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:04.738352   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.738366   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:04.738438   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:04.756656   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.756671   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:04.756738   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:04.774119   91210 logs.go:284] 0 containers: []
	W0108 19:30:04.774133   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:04.774141   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:04.774147   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:04.808712   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:04.808726   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:04.821099   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:04.821114   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:04.881703   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:04.881721   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:04.881729   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:04.896532   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:04.896553   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:07.508773   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:07.519864   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:07.536426   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.536439   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:07.536525   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:07.554683   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.554700   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:07.554773   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:07.574796   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.574812   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:07.574884   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:07.594174   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.594188   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:07.594263   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:07.611336   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.611350   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:07.611418   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:07.629335   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.629349   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:07.629420   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:07.647622   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.647636   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:07.647703   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:07.665664   91210 logs.go:284] 0 containers: []
	W0108 19:30:07.665678   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:07.665694   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:07.665702   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:07.680167   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:07.680185   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:07.729827   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:07.729842   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:07.764277   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:07.764293   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:07.776649   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:07.776666   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:07.830403   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:10.330890   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:10.342473   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:10.360501   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.360515   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:10.360588   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:10.377580   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.377595   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:10.377661   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:10.396802   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.396815   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:10.396890   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:10.417417   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.417435   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:10.417530   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:10.438475   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.438491   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:10.438561   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:10.457053   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.457069   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:10.457155   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:10.476140   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.476153   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:10.476228   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:10.503478   91210 logs.go:284] 0 containers: []
	W0108 19:30:10.503495   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:10.503507   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:10.503518   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:10.539500   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:10.539528   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:10.552438   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:10.552460   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:10.617663   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:10.617676   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:10.617689   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:10.632359   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:10.632377   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:13.181149   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:13.192804   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:13.211751   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.211765   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:13.211839   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:13.229717   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.229731   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:13.229807   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:13.248308   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.248323   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:13.248406   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:13.266957   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.266971   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:13.267044   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:13.285970   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.285984   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:13.286053   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:13.305832   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.305846   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:13.305914   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:13.324777   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.324791   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:13.324863   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:13.343036   91210 logs.go:284] 0 containers: []
	W0108 19:30:13.343052   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:13.343059   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:13.343066   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:13.355549   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:13.355566   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:13.408298   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:13.408318   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:13.408326   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:13.423141   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:13.423159   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:13.477092   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:13.477107   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:16.013642   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:16.024357   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:16.042980   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.042995   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:16.043064   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:16.061462   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.061474   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:16.061554   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:16.079649   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.079663   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:16.079733   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:16.096746   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.096760   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:16.096828   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:16.115293   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.115311   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:16.115412   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:16.133641   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.133659   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:16.133733   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:16.151742   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.151756   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:16.151827   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:16.168764   91210 logs.go:284] 0 containers: []
	W0108 19:30:16.168777   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:16.168784   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:16.168793   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:16.203084   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:16.203097   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:16.215537   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:16.215552   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:16.267540   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:16.267552   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:16.267560   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:16.282058   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:16.282072   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:18.836696   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:18.848182   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:18.867640   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.867653   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:18.867729   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:18.886100   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.886114   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:18.886182   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:18.903311   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.903325   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:18.903399   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:18.923704   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.923720   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:18.923793   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:18.942399   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.942414   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:18.942484   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:18.960481   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.960495   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:18.960565   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:18.979940   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.979954   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:18.980028   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:18.998143   91210 logs.go:284] 0 containers: []
	W0108 19:30:18.998157   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:18.998164   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:18.998171   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:19.033332   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:19.033347   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:19.045823   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:19.045836   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:19.125470   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:19.125485   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:19.125494   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:19.140331   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:19.140347   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:21.722879   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:21.733136   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:21.750721   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.750737   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:21.750811   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:21.769043   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.769059   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:21.769128   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:21.787328   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.787343   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:21.787410   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:21.804513   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.804527   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:21.804597   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:21.823038   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.823051   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:21.823122   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:21.841100   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.841114   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:21.841195   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:21.858749   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.858762   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:21.858834   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:21.876828   91210 logs.go:284] 0 containers: []
	W0108 19:30:21.876841   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:21.876848   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:21.876856   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:21.891612   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:21.891627   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:21.945968   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:21.945982   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:21.980658   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:21.980677   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:21.993241   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:21.993255   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:22.047021   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:24.547171   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:24.558872   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:24.576712   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.576726   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:24.576794   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:24.594730   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.594744   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:24.594813   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:24.612375   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.612393   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:24.612480   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:24.632088   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.632104   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:24.632198   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:24.650724   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.650739   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:24.650819   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:24.671013   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.671027   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:24.671094   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:24.710203   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.710217   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:24.710285   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:24.728025   91210 logs.go:284] 0 containers: []
	W0108 19:30:24.728040   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:24.728047   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:24.728054   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:24.742693   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:24.742709   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:24.792271   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:24.800897   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:24.835388   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:24.835402   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:24.847613   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:24.847632   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:24.900118   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:27.400737   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:27.412236   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:27.430298   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.430315   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:27.430393   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:27.447409   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.447425   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:27.447492   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:27.464484   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.464496   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:27.464570   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:27.482206   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.482220   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:27.482290   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:27.501061   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.501081   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:27.501148   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:27.519705   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.519718   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:27.519785   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:27.537424   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.537437   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:27.537507   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:27.556901   91210 logs.go:284] 0 containers: []
	W0108 19:30:27.556915   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:27.556922   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:27.556929   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:27.592038   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:27.592052   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:27.604773   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:27.604788   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:27.660072   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:27.660090   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:27.660105   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:27.675261   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:27.675283   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:30.232632   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:30.243865   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:30.262868   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.262902   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:30.262968   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:30.281184   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.281198   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:30.281269   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:30.299457   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.299471   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:30.299541   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:30.317363   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.317380   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:30.317480   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:30.334967   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.334988   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:30.335076   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:30.353423   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.353437   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:30.353509   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:30.371420   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.371435   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:30.371506   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:30.389893   91210 logs.go:284] 0 containers: []
	W0108 19:30:30.389907   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:30.389915   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:30.389922   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:30.424194   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:30.424209   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:30.436586   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:30.436603   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:30.493438   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:30.493450   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:30.493461   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:30.507748   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:30.507762   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:33.062489   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:33.072416   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:33.090568   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.090583   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:33.090655   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:33.107817   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.107832   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:33.107906   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:33.125678   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.125693   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:33.125761   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:33.144297   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.144311   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:33.144381   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:33.161559   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.161573   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:33.161644   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:33.179227   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.179241   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:33.179313   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:33.198366   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.198379   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:33.198447   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:33.217009   91210 logs.go:284] 0 containers: []
	W0108 19:30:33.217023   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:33.217031   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:33.217042   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:33.252276   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:33.252292   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:33.264622   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:33.264636   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:33.327821   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:33.327834   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:33.327843   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:33.342473   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:33.342488   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:35.890053   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:35.900758   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:35.923529   91210 logs.go:284] 0 containers: []
	W0108 19:30:35.923542   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:35.923614   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:35.944260   91210 logs.go:284] 0 containers: []
	W0108 19:30:35.944276   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:35.944352   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:36.011191   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.011206   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:36.011276   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:36.030164   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.030177   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:36.030246   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:36.047992   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.048005   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:36.048073   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:36.065543   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.065557   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:36.065624   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:36.082412   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.082425   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:36.082495   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:36.100063   91210 logs.go:284] 0 containers: []
	W0108 19:30:36.100078   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:36.100085   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:36.100093   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:36.136518   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:36.136533   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:36.149243   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:36.149264   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:36.204301   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:36.204313   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:36.204321   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:36.218844   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:36.218859   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:38.766302   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:38.778181   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:38.795761   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.795774   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:38.795865   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:38.812803   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.812830   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:38.812913   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:38.832599   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.832613   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:38.832682   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:38.850189   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.850203   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:38.850274   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:38.868935   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.868955   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:38.869029   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:38.886432   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.886445   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:38.886515   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:38.906014   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.906027   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:38.906092   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:38.924351   91210 logs.go:284] 0 containers: []
	W0108 19:30:38.924367   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:38.924374   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:38.924386   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:38.937207   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:38.937224   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:39.019297   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:39.019309   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:39.019317   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:39.033542   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:39.033560   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:39.088748   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:39.088763   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:41.624963   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:41.636202   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:41.654672   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.654686   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:41.654764   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:41.672394   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.672408   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:41.672481   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:41.689449   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.689462   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:41.689532   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:41.706575   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.706590   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:41.706665   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:41.723860   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.723873   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:41.723942   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:41.741886   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.741900   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:41.741970   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:41.760550   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.760565   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:41.760640   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:41.778609   91210 logs.go:284] 0 containers: []
	W0108 19:30:41.778623   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:41.778631   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:41.778641   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:41.813325   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:41.813338   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:41.825657   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:41.825672   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:41.878220   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:41.878232   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:41.878242   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:41.893641   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:41.893656   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:44.444271   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:44.454479   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:44.471763   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.471781   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:44.471850   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:44.489665   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.489684   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:44.489760   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:44.506875   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.506892   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:44.506956   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:44.524964   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.524978   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:44.525051   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:44.544453   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.544468   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:44.544538   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:44.560763   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.560780   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:44.560846   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:44.579688   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.579711   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:44.579800   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:44.597993   91210 logs.go:284] 0 containers: []
	W0108 19:30:44.598008   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:44.598016   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:44.598027   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:44.612395   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:44.612409   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:44.666610   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:44.666626   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:44.704672   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:44.704693   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:44.717527   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:44.717542   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:44.771934   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
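
The recurring "describe nodes" failure is a connectivity symptom, not a kubectl bug: with no kube-apiserver container running, nothing is listening on localhost:8443, so every request is refused. A quick equivalent reachability check in Go follows; the port is taken from the log, while the helper name is an assumption for illustration.

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp just checks whether anything accepts TCP connections on
// the apiserver's secure port reported in the log.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // e.g. "connection refused", as in the stderr above
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver reachable:", apiserverUp("localhost:8443"))
}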
	I0108 19:30:47.272917   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:47.284440   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:47.301782   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.301797   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:47.301869   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:47.320186   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.320199   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:47.320265   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:47.338023   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.338037   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:47.338104   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:47.357330   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.357343   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:47.357417   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:47.375513   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.375527   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:47.375605   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:47.393750   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.393764   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:47.393840   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:47.411678   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.411697   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:47.411777   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:47.429701   91210 logs.go:284] 0 containers: []
	W0108 19:30:47.429715   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:47.429723   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:47.429731   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:47.476690   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:47.476706   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:47.511192   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:47.511207   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:47.523830   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:47.523847   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:47.578340   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:47.578360   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:47.578368   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
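
The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl if it is installed, otherwise fall back to plain docker. The same fallback pattern in Go, as a sketch; the command names come from the log, and the error handling is deliberately simplified.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, like the
// shell one-liner in the log. Real code would distinguish "crictl
// missing" from "crictl present but failing".
func containerStatus() (string, error) {
	if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI answered:", err)
		return
	}
	fmt.Print(out)
}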
	I0108 19:30:50.093365   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:50.104663   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:50.122759   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.122772   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:50.122837   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:50.141034   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.141049   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:50.141115   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:50.158636   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.158651   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:50.158721   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:50.178237   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.178250   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:50.178326   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:50.195652   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.195666   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:50.195741   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:50.213653   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.213667   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:50.213736   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:50.231939   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.231954   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:50.232023   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:50.250020   91210 logs.go:284] 0 containers: []
	W0108 19:30:50.250033   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:50.250041   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:50.250048   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:50.262437   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:50.262458   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:50.315432   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:50.315445   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:50.315456   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:50.329568   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:50.329582   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:50.378481   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:50.378497   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:52.914372   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:52.925501   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:52.944216   91210 logs.go:284] 0 containers: []
	W0108 19:30:52.944229   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:52.944295   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:52.961628   91210 logs.go:284] 0 containers: []
	W0108 19:30:52.961641   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:52.961709   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:52.978797   91210 logs.go:284] 0 containers: []
	W0108 19:30:52.978811   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:52.978875   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:52.997483   91210 logs.go:284] 0 containers: []
	W0108 19:30:52.997529   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:52.997648   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:53.015922   91210 logs.go:284] 0 containers: []
	W0108 19:30:53.015936   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:53.016009   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:53.034163   91210 logs.go:284] 0 containers: []
	W0108 19:30:53.034176   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:53.034246   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:53.053258   91210 logs.go:284] 0 containers: []
	W0108 19:30:53.053273   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:53.053342   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:53.071978   91210 logs.go:284] 0 containers: []
	W0108 19:30:53.071992   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:53.071999   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:53.072015   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:53.084466   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:53.084479   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:53.145730   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:53.145743   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:53.145751   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:53.160612   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:53.160626   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:53.215684   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:53.215699   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
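
Each probe cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires the whole line to match, and -n keeps only the newest matching process. A host-side sketch of the same check, with sudo omitted and assuming the procps pgrep is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits with status 1 when nothing matches, which is why each
	// cycle in the log immediately falls back to the docker ps probes.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no running kube-apiserver process:", err)
		return
	}
	fmt.Println("newest kube-apiserver pid:", strings.TrimSpace(string(out)))
}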
	I0108 19:30:55.750736   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:55.762164   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:55.780156   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.780171   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:55.780245   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:55.797695   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.797708   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:55.797773   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:55.816539   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.816554   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:55.816628   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:55.833623   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.833638   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:55.833716   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:55.851526   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.851540   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:55.851608   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:55.868795   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.868811   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:55.868884   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:55.886935   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.886949   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:55.887037   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:55.906789   91210 logs.go:284] 0 containers: []
	W0108 19:30:55.906804   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:55.906813   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:55.906831   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:55.921106   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:55.921121   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:55.975928   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:55.975941   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:55.975954   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:56.000887   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:56.000906   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:30:56.049871   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:56.049886   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:58.585436   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:30:58.597063   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:30:58.615876   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.615890   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:30:58.615964   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:30:58.633718   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.633733   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:30:58.633810   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:30:58.653781   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.653795   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:30:58.653859   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:30:58.671161   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.671175   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:30:58.671248   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:30:58.688281   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.688295   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:30:58.688363   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:30:58.707546   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.707559   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:30:58.707628   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:30:58.725961   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.725975   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:30:58.726048   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:30:58.743782   91210 logs.go:284] 0 containers: []
	W0108 19:30:58.743796   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:30:58.743804   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:30:58.743810   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:30:58.778007   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:30:58.778025   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:30:58.790838   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:30:58.790861   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:30:58.839435   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:30:58.839462   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:30:58.839476   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:30:58.853994   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:30:58.854010   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:01.407825   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:01.419217   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:01.437875   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.437893   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:01.437969   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:01.455238   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.455252   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:01.455338   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:01.472775   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.472789   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:01.472862   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:01.490769   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.490783   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:01.490851   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:01.509334   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.509356   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:01.509428   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:01.526376   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.526397   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:01.526465   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:01.545433   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.545446   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:01.545511   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:01.563639   91210 logs.go:284] 0 containers: []
	W0108 19:31:01.563654   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:01.563662   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:01.563674   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:01.599741   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:01.599761   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:01.613053   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:01.613069   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:01.662551   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:01.662564   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:01.662572   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:01.677268   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:01.677285   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:04.225493   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:04.236608   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:04.255040   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.255054   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:04.255128   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:04.272476   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.272490   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:04.272559   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:04.289753   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.289767   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:04.289846   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:04.308400   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.308415   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:04.308486   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:04.327003   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.327018   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:04.327084   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:04.346214   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.346228   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:04.346322   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:04.364771   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.364785   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:04.364862   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:04.383111   91210 logs.go:284] 0 containers: []
	W0108 19:31:04.383126   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:04.383133   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:04.383139   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:04.421097   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:04.421113   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:04.434592   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:04.434621   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:04.500400   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:04.500414   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:04.500435   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:04.514703   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:04.514717   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:07.065735   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:07.076647   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:07.094600   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.094612   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:07.094680   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:07.114314   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.114330   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:07.114406   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:07.133368   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.133383   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:07.133462   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:07.151541   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.151555   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:07.151631   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:07.170648   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.170662   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:07.170729   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:07.189691   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.189705   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:07.189770   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:07.207922   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.207937   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:07.208005   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:07.226843   91210 logs.go:284] 0 containers: []
	W0108 19:31:07.226858   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:07.226865   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:07.226873   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:07.285121   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:07.285133   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:07.285147   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:07.299634   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:07.299647   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:07.354820   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:07.354834   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:07.389682   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:07.389716   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:09.903597   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:09.915836   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:09.934942   91210 logs.go:284] 0 containers: []
	W0108 19:31:09.934965   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:09.935034   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:09.953243   91210 logs.go:284] 0 containers: []
	W0108 19:31:09.953257   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:09.953331   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:10.003025   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.003046   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:10.003109   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:10.019966   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.019979   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:10.020032   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:10.039511   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.039526   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:10.039601   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:10.056823   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.056837   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:10.056908   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:10.075016   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.075030   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:10.075096   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:10.093531   91210 logs.go:284] 0 containers: []
	W0108 19:31:10.093545   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:10.093552   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:10.093559   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:10.143373   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:10.143388   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:10.179034   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:10.179049   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:10.191555   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:10.191570   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:10.244133   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:10.244153   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:10.244169   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:12.759297   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:12.769806   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:12.788028   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.788043   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:12.788130   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:12.806743   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.806755   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:12.806820   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:12.825361   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.825375   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:12.825445   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:12.844086   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.844105   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:12.844182   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:12.862824   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.862839   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:12.862911   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:12.881869   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.881884   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:12.881960   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:12.902017   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.902031   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:12.902121   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:12.923117   91210 logs.go:284] 0 containers: []
	W0108 19:31:12.923134   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:12.923143   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:12.923153   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:12.958886   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:12.958903   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:12.972017   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:12.972034   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:13.052137   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:13.052154   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:13.052168   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:13.066695   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:13.066710   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:15.621917   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:15.633152   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:15.651656   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.651670   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:15.651747   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:15.668773   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.668786   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:15.668854   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:15.686766   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.686780   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:15.686850   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:15.703390   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.703405   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:15.703477   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:15.722125   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.722138   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:15.722206   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:15.739712   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.739727   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:15.739794   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:15.758458   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.758472   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:15.758538   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:15.776788   91210 logs.go:284] 0 containers: []
	W0108 19:31:15.776802   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:15.776810   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:15.776819   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:15.789375   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:15.789388   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:15.842419   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:15.842431   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:15.842439   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:15.856807   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:15.856821   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:15.904884   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:15.904900   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:18.441107   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:18.450728   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:18.469390   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.469408   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:18.469479   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:18.487438   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.487454   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:18.487529   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:18.505488   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.505502   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:18.505569   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:18.523890   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.523905   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:18.523995   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:18.542079   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.542094   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:18.542169   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:18.561674   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.561688   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:18.561757   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:18.580695   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.580709   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:18.580781   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:18.600351   91210 logs.go:284] 0 containers: []
	W0108 19:31:18.600366   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:18.600374   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:18.600381   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:18.634898   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:18.634932   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:18.648092   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:18.648108   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:18.701336   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:18.701347   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:18.701356   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:18.715553   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:18.715568   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:21.266120   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:21.277743   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:21.296779   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.296792   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:21.296859   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:21.315387   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.315401   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:21.315465   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:21.332674   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.332688   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:21.332759   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:21.351320   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.351335   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:21.351413   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:21.370140   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.370153   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:21.370231   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:21.388563   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.388577   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:21.388657   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:21.409040   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.409055   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:21.409133   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:21.430079   91210 logs.go:284] 0 containers: []
	W0108 19:31:21.430098   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:21.430107   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:21.430124   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:21.464896   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:21.464914   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:21.477599   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:21.477618   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:21.614521   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:21.614533   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:21.614541   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:21.628805   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:21.628820   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:24.181695   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:24.193172   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:24.210132   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.210147   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:24.210218   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:24.228375   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.228395   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:24.228459   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:24.247223   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.247238   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:24.247307   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:24.266563   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.266578   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:24.266649   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:24.293557   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.293571   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:24.293646   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:24.310966   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.310980   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:24.311049   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:24.330544   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.330563   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:24.330692   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:24.351928   91210 logs.go:284] 0 containers: []
	W0108 19:31:24.351943   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:24.351950   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:24.351957   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:24.405160   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:24.405187   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:24.405203   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:24.422355   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:24.422372   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:24.523651   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:24.523665   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:31:24.561636   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:24.561654   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
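Each cycle above probes the eight expected control-plane and addon containers with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`; an empty ID list is what produces the `No container was found matching "..."` warnings. A hedged sketch of that enumeration, assuming local docker access (the real run executes these commands over SSH inside the minikube node):

```go
// Illustrative reconstruction of the per-component container checks: list IDs
// of containers named k8s_<component>. Empty output corresponds to the
// "No container was found matching" warnings seen in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("W: No container was found matching %q\n", c)
		}
	}
}
```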
	I0108 19:31:27.075078   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:31:27.085532   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:31:27.104170   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.104185   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:31:27.104252   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:31:27.122867   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.122880   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:31:27.122958   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:31:27.140840   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.140855   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:31:27.140922   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:31:27.159727   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.159749   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:31:27.159844   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:31:27.177567   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.177582   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:31:27.177647   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:31:27.196125   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.196138   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:31:27.196205   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:31:27.213946   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.213959   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:31:27.214028   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:31:27.232093   91210 logs.go:284] 0 containers: []
	W0108 19:31:27.232108   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:31:27.232116   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:31:27.232123   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:31:27.244530   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:31:27.244546   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:31:27.293618   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:31:27.293630   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:31:27.293640   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:31:27.308524   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:31:27.308538   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:31:27.358093   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:31:27.358108   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
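The recurring "describe nodes" failure comes from running the node-local kubectl binary for the cluster's Kubernetes version (v1.16.0 here) against /var/lib/minikube/kubeconfig; with no apiserver container running, the call is refused on localhost:8443 and exits with status 1. A minimal reconstruction, with the kubectl invocation copied verbatim from the log and the Go wrapper assumed:

```go
// Reconstruction of the failing "describe nodes" gather. With no apiserver
// running, kubectl exits 1 and stderr carries the connection-refused message.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// Expected here: "The connection to the server localhost:8443 was refused ..."
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
	}
}
```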
	[... the same log-gathering cycle repeats roughly every 2.5 to 3 seconds from 19:31:29.894 through 19:32:04.340: pgrep finds no kube-apiserver process, all eight per-component "docker ps" checks report 0 containers, the kubelet, dmesg, Docker, and container-status gathers run, and each "describe nodes" attempt exits with status 1 because the connection to the server localhost:8443 is refused ...]
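Besides the container checks, each cycle pulls the last 400 lines of the docker/cri-docker and kubelet journals plus level-filtered dmesg. The three commands below are copied from the log; gather() is an illustrative stand-in for minikube's ssh_runner-based collection, not its real API.

```go
// Sketch of the journal and dmesg gathers repeated in each cycle above.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s logs (err: %v) ==\n%s\n", name, err, out)
}

func main() {
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```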
	I0108 19:32:06.855903   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:06.867822   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:06.886863   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.886884   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:06.886969   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:06.905807   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.905823   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:06.905905   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:06.925785   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.925800   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:06.925867   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:06.943741   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.943755   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:06.943819   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:06.961961   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.961977   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:06.962052   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:06.979472   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.979486   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:06.979555   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:06.998063   91210 logs.go:284] 0 containers: []
	W0108 19:32:06.998077   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:06.998147   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:07.016754   91210 logs.go:284] 0 containers: []
	W0108 19:32:07.016769   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:07.016777   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:07.016784   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:07.067089   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:07.067100   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:07.067108   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:07.081626   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:07.081641   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:07.130012   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:07.130028   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:07.165871   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:07.165889   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:09.679194   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:09.689277   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:09.708261   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.708275   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:09.708340   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:09.726647   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.726661   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:09.726728   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:09.745989   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.746003   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:09.746069   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:09.763584   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.763598   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:09.763666   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:09.781453   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.798914   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:09.798998   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:09.818284   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.818299   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:09.818367   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:09.838480   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.838495   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:09.838563   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:09.855901   91210 logs.go:284] 0 containers: []
	W0108 19:32:09.855916   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:09.855924   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:09.855931   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:09.868974   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:09.868989   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:09.935043   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:09.935054   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:09.935061   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:09.949680   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:09.949696   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:10.017750   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:10.017766   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:12.555149   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:12.566615   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:12.585937   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.585954   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:12.586029   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:12.605239   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.605253   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:12.605339   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:12.624527   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.624543   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:12.624615   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:12.644307   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.644357   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:12.644484   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:12.665536   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.665551   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:12.665623   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:12.705397   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.705411   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:12.705484   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:12.723906   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.723920   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:12.723995   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:12.741795   91210 logs.go:284] 0 containers: []
	W0108 19:32:12.741810   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:12.741818   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:12.741825   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:12.790005   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:12.790020   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:12.824299   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:12.824313   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:12.836534   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:12.836551   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:12.916237   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:12.916291   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:12.916324   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:15.433325   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:15.444695   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:15.464662   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.464685   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:15.464773   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:15.485172   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.485186   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:15.485254   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:15.503533   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.503549   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:15.503643   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:15.522997   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.523012   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:15.523080   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:15.541823   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.541837   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:15.541897   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:15.560351   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.560366   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:15.560431   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:15.579883   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.579897   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:15.579967   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:15.598095   91210 logs.go:284] 0 containers: []
	W0108 19:32:15.598108   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:15.598115   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:15.598122   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:15.632685   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:15.632702   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:15.645545   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:15.645560   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:15.696837   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:15.696848   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:15.696856   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:15.711275   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:15.711291   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:18.265961   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:18.275378   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:18.293321   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.293336   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:18.293407   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:18.310288   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.310302   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:18.310368   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:18.328474   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.328494   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:18.328578   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:18.350729   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.350743   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:18.350844   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:18.371984   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.372000   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:18.372065   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:18.390509   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.390523   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:18.390595   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:18.409592   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.409605   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:18.409688   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:18.430335   91210 logs.go:284] 0 containers: []
	W0108 19:32:18.430358   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:18.430372   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:18.430384   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:18.510173   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:18.510185   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:18.510192   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:18.524559   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:18.524573   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:18.588147   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:18.588165   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:18.623157   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:18.623171   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:21.141554   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:21.154127   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:21.175026   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.175043   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:21.175123   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:21.194351   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.194368   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:21.194437   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:21.214274   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.214289   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:21.214356   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:21.233434   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.233450   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:21.233516   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:21.253529   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.253547   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:21.253630   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:21.273224   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.273238   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:21.273309   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:21.296994   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.297008   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:21.297083   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:21.316876   91210 logs.go:284] 0 containers: []
	W0108 19:32:21.316890   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:21.316897   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:21.316904   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:21.331674   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:21.331690   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:21.389704   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:21.389720   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:21.431202   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:21.431223   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:21.446273   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:21.446289   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:21.513687   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:24.013783   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:24.024288   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:24.043598   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.043631   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:24.043748   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:24.067167   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.067184   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:24.067258   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:24.089228   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.089243   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:24.089317   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:24.113363   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.113383   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:24.113466   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:24.136406   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.136421   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:24.136492   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:24.157742   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.157758   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:24.157860   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:24.179604   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.179619   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:24.179694   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:24.200139   91210 logs.go:284] 0 containers: []
	W0108 19:32:24.200156   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:24.200167   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:24.200177   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:24.244568   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:24.244588   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:24.260701   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:24.260722   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:24.324391   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:24.324404   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:24.324412   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:24.340195   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:24.340212   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:26.895349   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:26.909751   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:26.940323   91210 logs.go:284] 0 containers: []
	W0108 19:32:26.940343   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:26.940433   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:26.964802   91210 logs.go:284] 0 containers: []
	W0108 19:32:26.964821   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:26.964902   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:27.008977   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.008991   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:27.009059   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:27.031282   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.031301   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:27.031390   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:27.069041   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.069080   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:27.069200   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:27.093251   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.093266   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:27.093332   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:27.115376   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.115390   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:27.115489   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:27.136806   91210 logs.go:284] 0 containers: []
	W0108 19:32:27.136821   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:27.136829   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:27.136836   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:27.225734   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:27.225746   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:27.225754   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:27.244622   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:27.244641   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:27.300416   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:27.300439   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:27.346479   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:27.346503   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:29.863326   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:29.875362   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:29.898035   91210 logs.go:284] 0 containers: []
	W0108 19:32:29.898050   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:29.898163   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:29.921919   91210 logs.go:284] 0 containers: []
	W0108 19:32:29.921934   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:29.922013   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:29.945243   91210 logs.go:284] 0 containers: []
	W0108 19:32:29.945260   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:29.945348   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:29.971301   91210 logs.go:284] 0 containers: []
	W0108 19:32:29.971323   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:29.971416   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:30.006512   91210 logs.go:284] 0 containers: []
	W0108 19:32:30.006526   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:30.006603   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:30.027913   91210 logs.go:284] 0 containers: []
	W0108 19:32:30.027930   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:30.028008   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:30.049567   91210 logs.go:284] 0 containers: []
	W0108 19:32:30.049584   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:30.049657   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:30.073256   91210 logs.go:284] 0 containers: []
	W0108 19:32:30.073276   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:30.073287   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:30.073302   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:30.114524   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:30.114543   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:30.128696   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:30.128722   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:30.197631   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:30.197644   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:30.197689   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:30.213325   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:30.213339   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:32.772170   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:32.784162   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:32.806905   91210 logs.go:284] 0 containers: []
	W0108 19:32:32.806919   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:32.807000   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:32.828082   91210 logs.go:284] 0 containers: []
	W0108 19:32:32.828098   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:32.828176   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:32.854434   91210 logs.go:284] 0 containers: []
	W0108 19:32:32.854449   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:32.854521   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:32.924998   91210 logs.go:284] 0 containers: []
	W0108 19:32:32.925039   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:32.925223   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:32.957832   91210 logs.go:284] 0 containers: []
	W0108 19:32:32.957851   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:32.957951   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:33.006644   91210 logs.go:284] 0 containers: []
	W0108 19:32:33.006677   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:33.006785   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:33.032059   91210 logs.go:284] 0 containers: []
	W0108 19:32:33.032075   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:33.032162   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:33.101477   91210 logs.go:284] 0 containers: []
	W0108 19:32:33.101491   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:33.101498   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:33.101509   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:33.142960   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:33.142980   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:33.157499   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:33.157516   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:33.212575   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:33.212588   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:33.212596   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:33.228148   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:33.228165   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:35.793418   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:35.802842   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:35.820629   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.820643   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:35.820710   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:35.838742   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.838756   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:35.838827   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:35.858093   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.858107   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:35.858170   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:35.876071   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.876087   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:35.876160   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:35.892875   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.892890   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:35.892958   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:35.910142   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.910156   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:35.910241   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:35.928903   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.928917   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:35.928985   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:35.946382   91210 logs.go:284] 0 containers: []
	W0108 19:32:35.946397   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:35.946405   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:35.946414   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:35.982969   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:35.982984   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:35.995527   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:35.995544   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:36.047320   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:36.047332   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:36.047341   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:36.061533   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:36.061548   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:38.609754   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:38.621128   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:38.641056   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.641070   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:38.641136   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:38.658636   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.658651   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:38.658718   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:38.678739   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.678753   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:38.678821   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:38.696061   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.696076   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:38.696144   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:38.714227   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.714256   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:38.714328   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:38.731474   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.731489   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:38.731557   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:38.749192   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.749214   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:38.749291   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:38.767909   91210 logs.go:284] 0 containers: []
	W0108 19:32:38.767924   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:38.767932   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:38.767938   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:38.803326   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:38.803344   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:38.816866   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:38.816882   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:38.873356   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:38.873366   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:38.873374   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:38.888706   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:38.888722   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:41.440386   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:41.451797   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:32:41.476708   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.476722   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:32:41.476800   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:32:41.505196   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.505222   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:32:41.505337   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:32:41.529966   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.529987   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:32:41.530124   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:32:41.562602   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.562619   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:32:41.562697   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:32:41.585632   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.585648   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:32:41.585714   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:32:41.606238   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.606252   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:32:41.606309   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:32:41.625225   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.625243   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:32:41.625358   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:32:41.659990   91210 logs.go:284] 0 containers: []
	W0108 19:32:41.660020   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:32:41.660032   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:32:41.660041   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:32:41.703877   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:32:41.703896   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:32:41.717354   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:32:41.717369   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:32:41.783683   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:32:41.783694   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:32:41.783702   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:32:41.798244   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:32:41.798257   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 19:32:44.366358   91210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:32:44.379399   91210 kubeadm.go:640] restartCluster took 4m13.739735008s
	W0108 19:32:44.379443   91210 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0108 19:32:44.379460   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 19:32:44.792628   91210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:32:44.808856   91210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:32:44.817703   91210 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:32:44.817758   91210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:32:44.826401   91210 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
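The status-2 exit above is the expected outcome after the `kubeadm reset` a few lines earlier: reset cleans out /etc/kubernetes, so the probe finds none of the four kubeconfig files and minikube skips stale-config cleanup, proceeding straight to a fresh `kubeadm init`. The check itself is just the logged `ls` invocation — a minimal sketch, with the exit-code interpretation being my reading of the log rather than documented minikube behavior:

    # Stale-config probe (command verbatim from the log).
    # Exit 0: old kubeconfigs exist and would be cleaned up; exit 2: fresh init path.
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf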
	I0108 19:32:44.826428   91210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:32:44.880191   91210 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:32:44.880387   91210 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:32:45.139094   91210 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:32:45.139224   91210 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:32:45.139314   91210 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:32:45.320299   91210 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:32:45.328263   91210 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:32:45.335627   91210 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:32:45.397599   91210 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:32:45.439421   91210 out.go:204]   - Generating certificates and keys ...
	I0108 19:32:45.439511   91210 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:32:45.439576   91210 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:32:45.439658   91210 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 19:32:45.439708   91210 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 19:32:45.439761   91210 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 19:32:45.439801   91210 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 19:32:45.439860   91210 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 19:32:45.439908   91210 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 19:32:45.439964   91210 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 19:32:45.440027   91210 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 19:32:45.440059   91210 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 19:32:45.440101   91210 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:32:45.851100   91210 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:32:45.980351   91210 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:32:46.125615   91210 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:32:46.173338   91210 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:32:46.174071   91210 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:32:46.194992   91210 out.go:204]   - Booting up control plane ...
	I0108 19:32:46.195117   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:32:46.195198   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:32:46.195268   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:32:46.195370   91210 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:32:46.195525   91210 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:33:26.183247   91210 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:33:26.184523   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:33:26.184740   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:33:31.185056   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:33:31.185233   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:33:41.186247   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:33:41.186457   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:34:01.186507   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:34:01.186657   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:34:41.187505   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:34:41.187748   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:34:41.187769   91210 kubeadm.go:322] 
	I0108 19:34:41.187821   91210 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:34:41.187859   91210 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:34:41.187864   91210 kubeadm.go:322] 
	I0108 19:34:41.187920   91210 kubeadm.go:322] This error is likely caused by:
	I0108 19:34:41.187965   91210 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:34:41.188083   91210 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:34:41.188098   91210 kubeadm.go:322] 
	I0108 19:34:41.188205   91210 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:34:41.188245   91210 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:34:41.188288   91210 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:34:41.188303   91210 kubeadm.go:322] 
	I0108 19:34:41.188419   91210 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:34:41.188529   91210 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:34:41.188718   91210 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:34:41.188787   91210 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:34:41.188889   91210 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:34:41.188919   91210 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:34:41.190338   91210 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:34:41.190401   91210 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:34:41.190554   91210 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:34:41.190650   91210 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:34:41.190730   91210 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:34:41.190794   91210 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0108 19:34:41.190883   91210 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 19:34:41.190946   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 19:34:41.617943   91210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:34:41.628905   91210 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:34:41.628969   91210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:34:41.638305   91210 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:34:41.638331   91210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:34:41.692220   91210 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:34:41.692258   91210 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:34:41.930803   91210 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:34:41.930889   91210 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:34:41.930979   91210 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 19:34:42.106443   91210 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:34:42.107389   91210 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:34:42.114012   91210 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:34:42.185438   91210 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:34:42.206782   91210 out.go:204]   - Generating certificates and keys ...
	I0108 19:34:42.206877   91210 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:34:42.206951   91210 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:34:42.207041   91210 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 19:34:42.207090   91210 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 19:34:42.207193   91210 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 19:34:42.207259   91210 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 19:34:42.207317   91210 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 19:34:42.207423   91210 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 19:34:42.207536   91210 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 19:34:42.207657   91210 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 19:34:42.207710   91210 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 19:34:42.207772   91210 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:34:42.256503   91210 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:34:42.366786   91210 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:34:42.419315   91210 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:34:42.528886   91210 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:34:42.529503   91210 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:34:42.550657   91210 out.go:204]   - Booting up control plane ...
	I0108 19:34:42.550729   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:34:42.550797   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:34:42.550862   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:34:42.550939   91210 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:34:42.551054   91210 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:35:22.538567   91210 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:35:22.539507   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:22.539746   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:27.540987   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:27.541212   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:37.541323   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:37.541494   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:57.543243   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:57.543438   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:36:37.550355   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:36:37.550576   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:36:37.550594   91210 kubeadm.go:322] 
	I0108 19:36:37.550649   91210 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:36:37.550695   91210 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:36:37.550700   91210 kubeadm.go:322] 
	I0108 19:36:37.550735   91210 kubeadm.go:322] This error is likely caused by:
	I0108 19:36:37.550793   91210 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:36:37.550964   91210 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:36:37.550980   91210 kubeadm.go:322] 
	I0108 19:36:37.551115   91210 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:36:37.551157   91210 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:36:37.551203   91210 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:36:37.551220   91210 kubeadm.go:322] 
	I0108 19:36:37.551349   91210 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:36:37.551462   91210 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:36:37.551572   91210 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:36:37.551615   91210 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:36:37.551674   91210 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:36:37.551706   91210 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:36:37.552925   91210 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:36:37.552981   91210 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:36:37.553097   91210 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:36:37.553186   91210 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:36:37.553253   91210 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:36:37.553328   91210 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0108 19:36:37.553351   91210 kubeadm.go:406] StartCluster complete in 8m6.940239728s
	I0108 19:36:37.553443   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:36:37.571114   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.571128   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:36:37.571200   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:36:37.588233   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.588247   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:36:37.588322   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:36:37.606954   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.606968   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:36:37.607041   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:36:37.626517   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.626530   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:36:37.626601   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:36:37.644598   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.644612   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:36:37.644682   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:36:37.664608   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.664623   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:36:37.664691   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:36:37.683160   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.683173   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:36:37.683241   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:36:37.701492   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.701505   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:36:37.701513   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:36:37.701519   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:36:37.736029   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:36:37.736042   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:36:37.748656   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:36:37.748671   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:36:37.798012   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:36:37.798025   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:36:37.798037   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:36:37.812447   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:36:37.812462   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 19:36:37.862822   91210 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 19:36:37.862853   91210 out.go:239] * 
	W0108 19:36:37.862890   91210 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:36:37.862905   91210 out.go:239] * 
	W0108 19:36:37.863550   91210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 19:36:37.915444   91210 out.go:177] 
	W0108 19:36:37.957664   91210 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:36:37.957720   91210 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 19:36:37.957737   91210 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 19:36:38.020865   91210 out.go:177] 

** /stderr **
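The failure above bottoms out in kubeadm waiting on a kubelet that never answers its health check. A minimal shell sketch collecting the checks kubeadm itself suggests in the output above (run inside the node, e.g. via 'out/minikube-darwin-amd64 ssh -p old-k8s-version-901000'; CONTAINERID is a placeholder):

    # Is the kubelet running, and if not, why?
    systemctl status kubelet
    journalctl -xeu kubelet
    # Did a control-plane container start and then crash?
    docker ps -a | grep kube | grep -v pause
    # Read the logs of whichever container is failing (CONTAINERID is a placeholder)
    docker logs CONTAINERID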
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-901000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
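The suggestion in the log points at the cgroup-driver mismatch flagged in the preflight warnings (Docker reports "cgroupfs", kubeadm recommends "systemd"). A hedged sketch of the two usual remedies; the first reuses the flag minikube itself suggests, while the second is a standard Docker daemon option and not something this run attempted:

    # Option 1: retry the start with the kubelet cgroup driver minikube suggests
    out/minikube-darwin-amd64 start -p old-k8s-version-901000 --driver=docker \
      --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
    # Option 2: switch Docker (inside the node) to the systemd cgroup driver, then restart it
    echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker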
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:28:15.901372485Z",
	            "FinishedAt": "2024-01-09T03:28:13.139361168Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7496a8cb9d7ae61048a11417a73137893d8a3461fad23af3b458647a8274e070",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50186"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7496a8cb9d7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "6b095a8900da5b41d0ebf6974e7816d2fdca4786b0382b85e41c193a78415ea4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
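The full inspect dump above can be reduced to just the fields the harness checks by using a Go template; a short sketch reusing the same templates minikube itself runs later in this log:

    # Container state (the harness expects "running")
    docker container inspect old-k8s-version-901000 --format '{{.State.Status}}'
    # Host port mapped to the node's SSH port 22/tcp
    docker container inspect old-k8s-version-901000 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'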
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (387.633471ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
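Exit status 2 here indicates the host container is up but at least one component is not (the helper itself notes it "may be ok"); only .Host is queried above. A sketch widening the query; the .Kubelet and .APIServer field names are assumptions based on minikube's default status output, not taken from this log:

    out/minikube-darwin-amd64 status -p old-k8s-version-901000 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'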
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25: (1.37224292s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-798000 sudo                                 | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:23 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-798000 sudo                                 | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-798000 sudo                                 | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:23 PST |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-798000 sudo find                            | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:23 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-798000 sudo crio                            | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:23 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-798000                                      | kubenet-798000         | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:23 PST |
	| start   | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:23 PST | 08 Jan 24 19:26 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-363000             | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:26 PST | 08 Jan 24 19:26 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:26 PST | 08 Jan 24 19:26 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901000        | old-k8s-version-901000 | jenkins | v1.32.0 | 08 Jan 24 19:26 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-363000                  | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:26 PST | 08 Jan 24 19:26 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:26 PST | 08 Jan 24 19:32 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-901000                              | old-k8s-version-901000 | jenkins | v1.32.0 | 08 Jan 24 19:28 PST | 08 Jan 24 19:28 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901000             | old-k8s-version-901000 | jenkins | v1.32.0 | 08 Jan 24 19:28 PST | 08 Jan 24 19:28 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-901000                              | old-k8s-version-901000 | jenkins | v1.32.0 | 08 Jan 24 19:28 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-363000 image list                           | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	| delete  | -p no-preload-363000                                   | no-preload-363000      | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	| start   | -p embed-certs-689000                                  | embed-certs-689000     | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:34 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-689000            | embed-certs-689000     | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-689000                                  | embed-certs-689000     | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-689000                 | embed-certs-689000     | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-689000                                  | embed-certs-689000     | jenkins | v1.32.0 | 08 Jan 24 19:34 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 19:34:22
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 19:34:22.986344   91672 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:34:22.986745   91672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:34:22.986750   91672 out.go:309] Setting ErrFile to fd 2...
	I0108 19:34:22.986754   91672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:34:22.986947   91672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:34:22.988533   91672 out.go:303] Setting JSON to false
	I0108 19:34:23.012679   91672 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":38034,"bootTime":1704733228,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:34:23.012793   91672 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:34:23.034849   91672 out.go:177] * [embed-certs-689000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:34:23.056554   91672 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:34:23.056651   91672 notify.go:220] Checking for updates...
	I0108 19:34:23.077852   91672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:34:23.099684   91672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:34:23.120477   91672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:34:23.141658   91672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:34:23.162694   91672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:34:23.184197   91672 config.go:182] Loaded profile config "embed-certs-689000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:34:23.184915   91672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:34:23.243261   91672 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:34:23.243426   91672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:34:23.344826   91672 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:34:23.334419963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:34:23.387393   91672 out.go:177] * Using the docker driver based on existing profile
	I0108 19:34:23.408322   91672 start.go:298] selected driver: docker
	I0108 19:34:23.408341   91672 start.go:902] validating driver "docker" against &{Name:embed-certs-689000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-689000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:34:23.408412   91672 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:34:23.411543   91672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:34:23.514129   91672 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:34:23.5040666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:34:23.514363   91672 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 19:34:23.514433   91672 cni.go:84] Creating CNI manager for ""
	I0108 19:34:23.514446   91672 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:34:23.514456   91672 start_flags.go:321] config:
	{Name:embed-certs-689000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-689000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:34:23.557410   91672 out.go:177] * Starting control plane node embed-certs-689000 in cluster embed-certs-689000
	I0108 19:34:23.578295   91672 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:34:23.599287   91672 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:34:23.641240   91672 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 19:34:23.641287   91672 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 19:34:23.641291   91672 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:34:23.641301   91672 cache.go:56] Caching tarball of preloaded images
	I0108 19:34:23.641430   91672 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:34:23.641440   91672 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 19:34:23.641924   91672 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/config.json ...
	I0108 19:34:23.692199   91672 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:34:23.692222   91672 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:34:23.692247   91672 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:34:23.692293   91672 start.go:365] acquiring machines lock for embed-certs-689000: {Name:mkf9fe2b53cac616cb8119cd51ecda7abcb952c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:34:23.692386   91672 start.go:369] acquired machines lock for "embed-certs-689000" in 72.206µs
	I0108 19:34:23.692407   91672 start.go:96] Skipping create...Using existing machine configuration
	I0108 19:34:23.692415   91672 fix.go:54] fixHost starting: 
	I0108 19:34:23.692653   91672 cli_runner.go:164] Run: docker container inspect embed-certs-689000 --format={{.State.Status}}
	I0108 19:34:23.743995   91672 fix.go:102] recreateIfNeeded on embed-certs-689000: state=Stopped err=<nil>
	W0108 19:34:23.744031   91672 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 19:34:23.765791   91672 out.go:177] * Restarting existing docker container for "embed-certs-689000" ...
	I0108 19:34:23.787397   91672 cli_runner.go:164] Run: docker start embed-certs-689000
	I0108 19:34:24.040550   91672 cli_runner.go:164] Run: docker container inspect embed-certs-689000 --format={{.State.Status}}
	I0108 19:34:24.094597   91672 kic.go:430] container "embed-certs-689000" state is running.
	I0108 19:34:24.095236   91672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-689000
	I0108 19:34:24.151352   91672 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/config.json ...
	I0108 19:34:24.151800   91672 machine.go:88] provisioning docker machine ...
	I0108 19:34:24.151830   91672 ubuntu.go:169] provisioning hostname "embed-certs-689000"
	I0108 19:34:24.151904   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:24.217815   91672 main.go:141] libmachine: Using SSH client type: native
	I0108 19:34:24.218252   91672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50315 <nil> <nil>}
	I0108 19:34:24.218272   91672 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-689000 && echo "embed-certs-689000" | sudo tee /etc/hostname
	I0108 19:34:24.219585   91672 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 19:34:27.365442   91672 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-689000
	
	I0108 19:34:27.365548   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:27.419549   91672 main.go:141] libmachine: Using SSH client type: native
	I0108 19:34:27.419877   91672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50315 <nil> <nil>}
	I0108 19:34:27.419891   91672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-689000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-689000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-689000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:34:27.554322   91672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 19:34:27.554341   91672 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:34:27.554361   91672 ubuntu.go:177] setting up certificates
	I0108 19:34:27.554371   91672 provision.go:83] configureAuth start
	I0108 19:34:27.554448   91672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-689000
	I0108 19:34:27.606183   91672 provision.go:138] copyHostCerts
	I0108 19:34:27.606292   91672 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:34:27.606301   91672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:34:27.606470   91672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:34:27.606753   91672 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:34:27.606764   91672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:34:27.606865   91672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:34:27.607075   91672 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:34:27.607089   91672 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:34:27.607170   91672 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:34:27.607322   91672 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.embed-certs-689000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-689000]
	I0108 19:34:27.704699   91672 provision.go:172] copyRemoteCerts
	I0108 19:34:27.704761   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:34:27.704821   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:27.756281   91672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50315 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/embed-certs-689000/id_rsa Username:docker}
	I0108 19:34:27.851314   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:34:27.871684   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 19:34:27.892122   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 19:34:27.912305   91672 provision.go:86] duration metric: configureAuth took 357.927619ms
	I0108 19:34:27.912320   91672 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:34:27.912466   91672 config.go:182] Loaded profile config "embed-certs-689000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:34:27.912527   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:27.963879   91672 main.go:141] libmachine: Using SSH client type: native
	I0108 19:34:27.964165   91672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50315 <nil> <nil>}
	I0108 19:34:27.964175   91672 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:34:28.100581   91672 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:34:28.100601   91672 ubuntu.go:71] root file system type: overlay
	I0108 19:34:28.100683   91672 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:34:28.100772   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.153728   91672 main.go:141] libmachine: Using SSH client type: native
	I0108 19:34:28.154044   91672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50315 <nil> <nil>}
	I0108 19:34:28.154098   91672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:34:28.299411   91672 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:34:28.299531   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.351362   91672 main.go:141] libmachine: Using SSH client type: native
	I0108 19:34:28.351663   91672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 50315 <nil> <nil>}
	I0108 19:34:28.351676   91672 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:34:28.491898   91672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 19:34:28.491924   91672 machine.go:91] provisioned docker machine in 4.340212286s
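The provisioning step above is deliberately idempotent: the new docker.service is written out as a .new file, compared against the unit on disk with diff -u, and Docker is only swapped and restarted when the two differ. A minimal Go sketch of that write-diff-swap pattern (illustrative only, not minikube's actual code; the path in main is a placeholder):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// applyIfChanged mirrors the shell pattern above: skip everything when the
// file already matches, otherwise stage a .new copy, swap it in, and run the
// follow-up commands (daemon-reload, restart, ...).
func applyIfChanged(path string, want []byte, postCmds [][]string) error {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: no restart needed
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, c := range postCmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyIfChanged("/tmp/docker.service.demo", []byte("[Unit]\n"), nil); err != nil {
		fmt.Println(err)
	}
}

Skipping the restart when nothing changed is what keeps repeated starts from bouncing the Docker daemon needlessly.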
	I0108 19:34:28.491930   91672 start.go:300] post-start starting for "embed-certs-689000" (driver="docker")
	I0108 19:34:28.491938   91672 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:34:28.492033   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:34:28.492100   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.544310   91672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50315 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/embed-certs-689000/id_rsa Username:docker}
	I0108 19:34:28.640594   91672 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:34:28.644444   91672 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:34:28.644465   91672 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:34:28.644472   91672 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:34:28.644478   91672 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:34:28.644490   91672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:34:28.644582   91672 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:34:28.644788   91672 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:34:28.645001   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:34:28.653050   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:34:28.673019   91672 start.go:303] post-start completed in 181.081947ms
	I0108 19:34:28.673111   91672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:34:28.673166   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.724529   91672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50315 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/embed-certs-689000/id_rsa Username:docker}
	I0108 19:34:28.815519   91672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:34:28.820301   91672 fix.go:56] fixHost completed within 5.127996319s
	I0108 19:34:28.820321   91672 start.go:83] releasing machines lock for "embed-certs-689000", held for 5.128039059s
	I0108 19:34:28.820418   91672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-689000
	I0108 19:34:28.872101   91672 ssh_runner.go:195] Run: cat /version.json
	I0108 19:34:28.872185   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.873107   91672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:34:28.873309   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:28.927849   91672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50315 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/embed-certs-689000/id_rsa Username:docker}
	I0108 19:34:28.927845   91672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50315 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/embed-certs-689000/id_rsa Username:docker}
	I0108 19:34:29.139529   91672 ssh_runner.go:195] Run: systemctl --version
	I0108 19:34:29.144554   91672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 19:34:29.149589   91672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 19:34:29.166204   91672 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 19:34:29.166272   91672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 19:34:29.174984   91672 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 19:34:29.174999   91672 start.go:475] detecting cgroup driver to use...
	I0108 19:34:29.175011   91672 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:34:29.175122   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:34:29.189898   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 19:34:29.199445   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:34:29.208789   91672 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:34:29.208860   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:34:29.218362   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:34:29.227720   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:34:29.236919   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:34:29.246064   91672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:34:29.254996   91672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 19:34:29.264296   91672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:34:29.272228   91672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:34:29.280192   91672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:34:29.327887   91672 ssh_runner.go:195] Run: sudo systemctl restart containerd
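The block above rewrites /etc/containerd/config.toml with a series of sed substitutions so containerd matches the detected "cgroupfs" driver before being restarted. A small Go sketch of the SystemdCgroup edit (an illustrative stand-in for the sed call above; the path in main is a placeholder):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs forces SystemdCgroup = false, preserving the line's original
// indentation, just like the sed expression in the log.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupfs("/tmp/config.toml.demo"); err != nil {
		fmt.Println(err)
	}
}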
	I0108 19:34:29.406464   91672 start.go:475] detecting cgroup driver to use...
	I0108 19:34:29.406484   91672 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:34:29.406551   91672 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:34:29.427689   91672 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:34:29.427769   91672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:34:29.439199   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:34:29.455167   91672 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:34:29.459612   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:34:29.468935   91672 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:34:29.493157   91672 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:34:29.639320   91672 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:34:29.725457   91672 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:34:29.725547   91672 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:34:29.741424   91672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:34:29.792076   91672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:34:30.098651   91672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:34:30.156791   91672 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 19:34:30.214843   91672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:34:30.268509   91672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:34:30.321267   91672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 19:34:30.344654   91672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:34:30.402152   91672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 19:34:30.483016   91672 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 19:34:30.483113   91672 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 19:34:30.487858   91672 start.go:543] Will wait 60s for crictl version
	I0108 19:34:30.487924   91672 ssh_runner.go:195] Run: which crictl
	I0108 19:34:30.492165   91672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 19:34:30.540171   91672 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 19:34:30.540246   91672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:34:30.563885   91672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:34:30.611062   91672 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 19:34:30.611199   91672 cli_runner.go:164] Run: docker exec -t embed-certs-689000 dig +short host.docker.internal
	I0108 19:34:30.727698   91672 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:34:30.727830   91672 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:34:30.732461   91672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
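The /etc/hosts update above is a filter-then-append pattern: grep -v strips any stale host.minikube.internal line, the fresh mapping is echoed onto the end, and the result is copied back over /etc/hosts in one step. A rough Go equivalent (illustrative only; the demo path and values are placeholders):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops every existing line ending in "\t<name>" and appends
// the current "<ip>\t<name>" mapping, so the entry exists exactly once.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.demo", "192.168.65.254", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}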
	I0108 19:34:30.743009   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:30.794585   91672 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 19:34:30.794656   91672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:34:30.813794   91672 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 19:34:30.813818   91672 docker.go:601] Images already preloaded, skipping extraction
	I0108 19:34:30.813904   91672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:34:30.832094   91672 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 19:34:30.832112   91672 cache_images.go:84] Images are preloaded, skipping loading
	I0108 19:34:30.832208   91672 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:34:30.881524   91672 cni.go:84] Creating CNI manager for ""
	I0108 19:34:30.881541   91672 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:34:30.881559   91672 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:34:30.881576   91672 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-689000 NodeName:embed-certs-689000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 19:34:30.881693   91672 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-689000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 19:34:30.881798   91672 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-689000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-689000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 19:34:30.881857   91672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 19:34:30.890641   91672 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:34:30.890701   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:34:30.898941   91672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0108 19:34:30.913937   91672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:34:30.929085   91672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0108 19:34:30.944582   91672 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:34:30.948611   91672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:34:30.958812   91672 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000 for IP: 192.168.67.2
	I0108 19:34:30.958831   91672 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:34:30.959009   91672 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:34:30.959079   91672 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:34:30.959180   91672 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/client.key
	I0108 19:34:30.959263   91672 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/apiserver.key.c7fa3a9e
	I0108 19:34:30.959343   91672 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/proxy-client.key
	I0108 19:34:30.959555   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:34:30.959603   91672 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:34:30.959613   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:34:30.959644   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:34:30.959676   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:34:30.959707   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:34:30.959772   91672 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:34:30.960337   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:34:30.980524   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 19:34:31.001131   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:34:31.021630   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/embed-certs-689000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 19:34:31.042395   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:34:31.062840   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:34:31.083327   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:34:31.104046   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:34:31.125017   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:34:31.146422   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:34:31.168686   91672 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:34:31.189886   91672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:34:31.206147   91672 ssh_runner.go:195] Run: openssl version
	I0108 19:34:31.211732   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:34:31.220822   91672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:34:31.224929   91672 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:34:31.224968   91672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:34:31.231638   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:34:31.240114   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:34:31.249062   91672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:34:31.253310   91672 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:34:31.253353   91672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:34:31.259929   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:34:31.268348   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:34:31.277376   91672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:34:31.281475   91672 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:34:31.281519   91672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:34:31.287796   91672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
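The openssl x509 -hash / ln -fs pairs above exist because OpenSSL locates CA certificates in /etc/ssl/certs by subject-hash filename: a certificate is only found at verification time if a <hash>.0 symlink points at its PEM. A compact sketch of that step (illustrative only; it shells out to the same openssl command shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the cert's OpenSSL subject hash and creates the
// "<hash>.0" symlink that the verifier looks up.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}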
	I0108 19:34:31.296141   91672 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:34:31.300194   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 19:34:31.306467   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 19:34:31.312989   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 19:34:31.319271   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 19:34:31.325632   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 19:34:31.332028   91672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
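Each "openssl x509 ... -checkend 86400" call above asks whether the certificate stays valid for at least another 86400 seconds (24 hours), so near-expired certs get regenerated instead of reused. The same check in pure Go with crypto/x509 (an illustrative sketch; the path in main is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// checkEnd reports an error when the certificate's NotAfter falls inside the
// given window, matching openssl's -checkend behaviour.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %s", path, window)
	}
	return nil
}

func main() {
	if err := checkEnd("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour); err != nil {
		fmt.Println(err)
	}
}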
	I0108 19:34:31.338150   91672 kubeadm.go:404] StartCluster: {Name:embed-certs-689000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-689000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:34:31.338263   91672 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:34:31.355514   91672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:34:31.364115   91672 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 19:34:31.364131   91672 kubeadm.go:636] restartCluster start
	I0108 19:34:31.364186   91672 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 19:34:31.372063   91672 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:31.372161   91672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-689000
	I0108 19:34:31.423643   91672 kubeconfig.go:135] verify returned: extract IP: "embed-certs-689000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:34:31.423805   91672 kubeconfig.go:146] "embed-certs-689000" context is missing from /Users/jenkins/minikube-integration/17866-74927/kubeconfig - will repair!
	I0108 19:34:31.424121   91672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:34:31.425744   91672 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 19:34:31.434248   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:31.434328   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:31.443382   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:31.935737   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:31.935819   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:31.945350   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:32.434960   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:32.435135   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:32.446425   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:32.934895   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:32.935001   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:32.946483   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:33.434959   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:33.435033   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:33.444557   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:33.936322   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:33.936496   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:33.948034   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:34.435402   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:34.435596   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:34.447453   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:34.934291   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:34.934399   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:34.944133   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:35.435140   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:35.435315   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:35.446592   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:35.934668   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:35.934797   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:35.946366   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:36.435024   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:36.435102   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:36.445027   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:36.934484   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:36.934665   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:36.946134   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:37.434527   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:37.434661   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:37.446231   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:37.934511   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:37.934599   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:37.944039   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
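The repeated "Checking apiserver status" entries above are a fixed-interval poll: roughly every 500ms the runner asks pgrep for a kube-apiserver pid and keeps retrying until one appears or the wait deadline passes. A minimal Go sketch of that loop (illustrative only; the interval and timeout are assumptions read off the log timestamps):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a matching process until it shows up or ctx
// expires. pgrep exits 0 only when a matching process exists.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("apiserver did not appear:", err)
	}
}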
	I0108 19:34:41.187505   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:34:41.187748   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:34:41.187769   91210 kubeadm.go:322] 
	I0108 19:34:41.187821   91210 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:34:41.187859   91210 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:34:41.187864   91210 kubeadm.go:322] 
	I0108 19:34:41.187920   91210 kubeadm.go:322] This error is likely caused by:
	I0108 19:34:41.187965   91210 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:34:41.188083   91210 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:34:41.188098   91210 kubeadm.go:322] 
	I0108 19:34:41.188205   91210 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:34:41.188245   91210 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:34:41.188288   91210 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:34:41.188303   91210 kubeadm.go:322] 
	I0108 19:34:41.188419   91210 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:34:41.188529   91210 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0108 19:34:41.188718   91210 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0108 19:34:41.188787   91210 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:34:41.188889   91210 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:34:41.188919   91210 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:34:41.190338   91210 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:34:41.190401   91210 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:34:41.190554   91210 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:34:41.190650   91210 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:34:41.190730   91210 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:34:41.190794   91210 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0108 19:34:41.190883   91210 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 19:34:41.190946   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 19:34:41.617943   91210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:34:41.628905   91210 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 19:34:41.628969   91210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:34:41.638305   91210 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 19:34:41.638331   91210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 19:34:41.692220   91210 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 19:34:41.692258   91210 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 19:34:41.930803   91210 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 19:34:41.930889   91210 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 19:34:41.930979   91210 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 19:34:42.106443   91210 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 19:34:42.107389   91210 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 19:34:42.114012   91210 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 19:34:42.185438   91210 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 19:34:42.206782   91210 out.go:204]   - Generating certificates and keys ...
	I0108 19:34:42.206877   91210 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 19:34:42.206951   91210 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 19:34:42.207041   91210 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 19:34:42.207090   91210 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 19:34:42.207193   91210 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 19:34:42.207259   91210 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 19:34:42.207317   91210 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 19:34:42.207423   91210 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 19:34:42.207536   91210 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 19:34:42.207657   91210 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 19:34:42.207710   91210 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 19:34:42.207772   91210 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 19:34:42.256503   91210 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 19:34:42.366786   91210 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 19:34:42.419315   91210 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 19:34:42.528886   91210 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 19:34:42.529503   91210 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 19:34:38.436237   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:38.436383   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:38.447799   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:38.934927   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:38.935100   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:38.946533   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:39.434453   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:39.434546   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:39.444171   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:39.936193   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:39.936318   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:39.948543   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:40.434675   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:40.434836   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:40.446503   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:40.934232   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:40.934301   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:40.943935   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:41.434796   91672 api_server.go:166] Checking apiserver status ...
	I0108 19:34:41.434904   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:34:41.446811   91672 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:41.446827   91672 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 19:34:41.446834   91672 kubeadm.go:1135] stopping kube-system containers ...
	I0108 19:34:41.446917   91672 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:34:41.465855   91672 docker.go:469] Stopping containers: [1d818971ec64 2797eeb799e2 41c8f151fe08 20fb8402f85b 9e4cf4817087 9e1e9556aa77 c2ff7f9d6b9c c440447e9d56 5675a90deefc 918b991d50f6 34759054a4f8 e5b9555de1b6 8889447cd74b 3f687b46d291 a090bff2aea2]
	I0108 19:34:41.465949   91672 ssh_runner.go:195] Run: docker stop 1d818971ec64 2797eeb799e2 41c8f151fe08 20fb8402f85b 9e4cf4817087 9e1e9556aa77 c2ff7f9d6b9c c440447e9d56 5675a90deefc 918b991d50f6 34759054a4f8 e5b9555de1b6 8889447cd74b 3f687b46d291 a090bff2aea2
	I0108 19:34:41.486198   91672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 19:34:41.497522   91672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:34:41.505831   91672 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  9 03:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  9 03:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  9 03:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  9 03:32 /etc/kubernetes/scheduler.conf
	
	I0108 19:34:41.505892   91672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 19:34:41.514048   91672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 19:34:41.522293   91672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 19:34:41.530280   91672 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:41.530338   91672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 19:34:41.538257   91672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 19:34:41.546379   91672 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:34:41.546430   91672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 19:34:41.554339   91672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:34:41.562581   91672 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 19:34:41.562597   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:41.609889   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:42.045407   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:42.182458   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:42.248022   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:42.335766   91672 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:34:42.335948   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:34:42.836842   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:34:42.550657   91210 out.go:204]   - Booting up control plane ...
	I0108 19:34:42.550729   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 19:34:42.550797   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 19:34:42.550862   91210 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 19:34:42.550939   91210 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 19:34:42.551054   91210 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 19:34:43.336077   91672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:34:43.350075   91672 api_server.go:72] duration metric: took 1.014340057s to wait for apiserver process to appear ...
	I0108 19:34:43.350090   91672 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:34:43.350112   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:43.351170   91672 api_server.go:269] stopped: https://127.0.0.1:50314/healthz: Get "https://127.0.0.1:50314/healthz": EOF
	I0108 19:34:43.850500   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:46.526312   91672 api_server.go:279] https://127.0.0.1:50314/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:34:46.526347   91672 api_server.go:103] status: https://127.0.0.1:50314/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:34:46.526363   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:46.628026   91672 api_server.go:279] https://127.0.0.1:50314/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:34:46.628050   91672 api_server.go:103] status: https://127.0.0.1:50314/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:34:46.850128   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:46.855746   91672 api_server.go:279] https://127.0.0.1:50314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:34:46.855763   91672 api_server.go:103] status: https://127.0.0.1:50314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:34:47.350142   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:47.355982   91672 api_server.go:279] https://127.0.0.1:50314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:34:47.356004   91672 api_server.go:103] status: https://127.0.0.1:50314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:34:47.850128   91672 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50314/healthz ...
	I0108 19:34:47.923374   91672 api_server.go:279] https://127.0.0.1:50314/healthz returned 200:
	ok
	I0108 19:34:47.934332   91672 api_server.go:141] control plane version: v1.28.4
	I0108 19:34:47.934385   91672 api_server.go:131] duration metric: took 4.58435855s to wait for apiserver health ...
	I0108 19:34:47.934431   91672 cni.go:84] Creating CNI manager for ""
	I0108 19:34:47.934448   91672 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:34:47.954817   91672 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 19:34:47.976653   91672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 19:34:48.031904   91672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 19:34:48.130970   91672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:34:48.142754   91672 system_pods.go:59] 8 kube-system pods found
	I0108 19:34:48.142778   91672 system_pods.go:61] "coredns-5dd5756b68-hzx6z" [cd2eb0f5-5ae8-4d8f-94ce-27bc631d187a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 19:34:48.142784   91672 system_pods.go:61] "etcd-embed-certs-689000" [f9e8a579-73c3-4296-8cba-aea687fd56e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 19:34:48.142795   91672 system_pods.go:61] "kube-apiserver-embed-certs-689000" [cb02b1ec-b2eb-4c8c-912e-77c1a9b7e480] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:34:48.142800   91672 system_pods.go:61] "kube-controller-manager-embed-certs-689000" [5d1af947-04cd-4f84-bc31-244908872e4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 19:34:48.142808   91672 system_pods.go:61] "kube-proxy-q5ftx" [bb29058b-002c-486f-9462-ade3c26d3531] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 19:34:48.142816   91672 system_pods.go:61] "kube-scheduler-embed-certs-689000" [f2e8b8d1-a055-40dd-b227-46ea6d669c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:34:48.142823   91672 system_pods.go:61] "metrics-server-57f55c9bc5-w628x" [dcd0a3be-f63a-4cd5-9a03-ca3b2dd2268c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 19:34:48.142836   91672 system_pods.go:61] "storage-provisioner" [2c115d93-3151-49ea-abc4-6305c0291965] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 19:34:48.142845   91672 system_pods.go:74] duration metric: took 11.841025ms to wait for pod list to return data ...
	I0108 19:34:48.142878   91672 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:34:48.222919   91672 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:34:48.222959   91672 node_conditions.go:123] node cpu capacity is 12
	I0108 19:34:48.222994   91672 node_conditions.go:105] duration metric: took 80.103098ms to run NodePressure ...
	I0108 19:34:48.223033   91672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:34:48.855431   91672 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 19:34:48.860053   91672 kubeadm.go:787] kubelet initialised
	I0108 19:34:48.860065   91672 kubeadm.go:788] duration metric: took 4.620366ms waiting for restarted kubelet to initialise ...
	I0108 19:34:48.860076   91672 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 19:34:48.865565   91672 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace to be "Ready" ...
	I0108 19:34:50.873564   91672 pod_ready.go:102] pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace has status "Ready":"False"
	I0108 19:34:52.873967   91672 pod_ready.go:102] pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace has status "Ready":"False"
	I0108 19:34:54.874751   91672 pod_ready.go:102] pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace has status "Ready":"False"
	I0108 19:34:56.374252   91672 pod_ready.go:92] pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace has status "Ready":"True"
	I0108 19:34:56.374265   91672 pod_ready.go:81] duration metric: took 7.508851436s waiting for pod "coredns-5dd5756b68-hzx6z" in "kube-system" namespace to be "Ready" ...
	I0108 19:34:56.374271   91672 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:34:58.381125   91672 pod_ready.go:102] pod "etcd-embed-certs-689000" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:00.381789   91672 pod_ready.go:102] pod "etcd-embed-certs-689000" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:01.882239   91672 pod_ready.go:92] pod "etcd-embed-certs-689000" in "kube-system" namespace has status "Ready":"True"
	I0108 19:35:01.882254   91672 pod_ready.go:81] duration metric: took 5.508098018s waiting for pod "etcd-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.882264   91672 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.888331   91672 pod_ready.go:92] pod "kube-apiserver-embed-certs-689000" in "kube-system" namespace has status "Ready":"True"
	I0108 19:35:01.888343   91672 pod_ready.go:81] duration metric: took 6.074575ms waiting for pod "kube-apiserver-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.888350   91672 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.893718   91672 pod_ready.go:92] pod "kube-controller-manager-embed-certs-689000" in "kube-system" namespace has status "Ready":"True"
	I0108 19:35:01.893729   91672 pod_ready.go:81] duration metric: took 5.374422ms waiting for pod "kube-controller-manager-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.893736   91672 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5ftx" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.898506   91672 pod_ready.go:92] pod "kube-proxy-q5ftx" in "kube-system" namespace has status "Ready":"True"
	I0108 19:35:01.898516   91672 pod_ready.go:81] duration metric: took 4.774796ms waiting for pod "kube-proxy-q5ftx" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.898522   91672 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.903587   91672 pod_ready.go:92] pod "kube-scheduler-embed-certs-689000" in "kube-system" namespace has status "Ready":"True"
	I0108 19:35:01.903596   91672 pod_ready.go:81] duration metric: took 5.069097ms waiting for pod "kube-scheduler-embed-certs-689000" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:01.903602   91672 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace to be "Ready" ...
	I0108 19:35:03.910020   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:05.912839   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:08.410714   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:10.415970   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:12.910151   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:14.910732   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:17.409536   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:19.410649   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:21.410686   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:22.538567   91210 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0108 19:35:22.539507   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:22.539746   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:23.909762   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:25.911381   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:27.540987   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:27.541212   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:28.410248   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:30.411039   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:32.910109   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:34.910613   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:37.410114   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:37.541323   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:37.541494   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:39.909319   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:41.910367   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:44.409094   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:46.411642   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:48.911020   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:51.408985   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:53.410359   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:55.411083   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:57.908965   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:35:57.543243   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:35:57.543438   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:35:59.909554   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:01.911000   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:04.408476   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:06.908798   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:08.916620   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:11.408432   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:13.422310   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:15.908286   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:17.908405   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:19.908436   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:21.909764   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:24.408178   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:26.908872   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:29.409111   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:31.409264   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:37.550355   91210 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 19:36:37.550576   91210 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 19:36:37.550594   91210 kubeadm.go:322] 
	I0108 19:36:37.550649   91210 kubeadm.go:322] Unfortunately, an error has occurred:
	I0108 19:36:37.550695   91210 kubeadm.go:322] 	timed out waiting for the condition
	I0108 19:36:37.550700   91210 kubeadm.go:322] 
	I0108 19:36:37.550735   91210 kubeadm.go:322] This error is likely caused by:
	I0108 19:36:37.550793   91210 kubeadm.go:322] 	- The kubelet is not running
	I0108 19:36:37.550964   91210 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 19:36:37.550980   91210 kubeadm.go:322] 
	I0108 19:36:37.551115   91210 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 19:36:37.551157   91210 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0108 19:36:37.551203   91210 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0108 19:36:37.551220   91210 kubeadm.go:322] 
	I0108 19:36:37.551349   91210 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 19:36:37.551462   91210 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 19:36:37.551572   91210 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 19:36:37.551615   91210 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 19:36:37.551674   91210 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0108 19:36:37.551706   91210 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0108 19:36:37.552925   91210 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0108 19:36:37.552981   91210 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 19:36:37.553097   91210 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0108 19:36:37.553186   91210 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 19:36:37.553253   91210 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 19:36:37.553328   91210 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0108 19:36:37.553351   91210 kubeadm.go:406] StartCluster complete in 8m6.940239728s
	I0108 19:36:37.553443   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 19:36:37.571114   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.571128   91210 logs.go:286] No container was found matching "kube-apiserver"
	I0108 19:36:37.571200   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 19:36:37.588233   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.588247   91210 logs.go:286] No container was found matching "etcd"
	I0108 19:36:37.588322   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 19:36:37.606954   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.606968   91210 logs.go:286] No container was found matching "coredns"
	I0108 19:36:37.607041   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 19:36:37.626517   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.626530   91210 logs.go:286] No container was found matching "kube-scheduler"
	I0108 19:36:37.626601   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 19:36:37.644598   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.644612   91210 logs.go:286] No container was found matching "kube-proxy"
	I0108 19:36:37.644682   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 19:36:37.664608   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.664623   91210 logs.go:286] No container was found matching "kube-controller-manager"
	I0108 19:36:37.664691   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0108 19:36:37.683160   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.683173   91210 logs.go:286] No container was found matching "kindnet"
	I0108 19:36:37.683241   91210 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 19:36:37.701492   91210 logs.go:284] 0 containers: []
	W0108 19:36:37.701505   91210 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0108 19:36:37.701513   91210 logs.go:123] Gathering logs for kubelet ...
	I0108 19:36:37.701519   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 19:36:37.736029   91210 logs.go:123] Gathering logs for dmesg ...
	I0108 19:36:37.736042   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 19:36:37.748656   91210 logs.go:123] Gathering logs for describe nodes ...
	I0108 19:36:37.748671   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 19:36:37.798012   91210 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 19:36:37.798025   91210 logs.go:123] Gathering logs for Docker ...
	I0108 19:36:37.798037   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0108 19:36:37.812447   91210 logs.go:123] Gathering logs for container status ...
	I0108 19:36:37.812462   91210 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 19:36:37.862822   91210 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 19:36:37.862853   91210 out.go:239] * 
	W0108 19:36:37.862890   91210 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:36:37.862905   91210 out.go:239] * 
	W0108 19:36:37.863550   91210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 19:36:37.915444   91210 out.go:177] 
	W0108 19:36:37.957664   91210 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 19:36:37.957720   91210 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 19:36:37.957737   91210 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 19:36:33.909554   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:35.912018   91672 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w628x" in "kube-system" namespace has status "Ready":"False"
	I0108 19:36:38.020865   91210 out.go:177] 
	
	
	==> Docker <==
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.748451198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.784302399Z" level=info msg="Loading containers: done."
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792131546Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792197106Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:21 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822909958Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822949228Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864005532Z" level=info msg="Processing signal 'terminated'"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864821872Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865266652Z" level=info msg="Daemon shutdown complete"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865552359Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: docker.service: Deactivated successfully.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.917820203Z" level=info msg="Starting up"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.928976131Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.078619687Z" level=info msg="Loading containers: start."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.160686378Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.195694564Z" level=info msg="Loading containers: done."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203622773Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203680796Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.230851290Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.231055167Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:29 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-01-09T03:36:39Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:36:39 up  2:55,  0 users,  load average: 0.83, 0.93, 1.05
	Linux old-k8s-version-901000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 09 03:36:37 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:36:38 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Jan 09 03:36:38 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:36:38 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: I0109 03:36:38.695289   20283 server.go:410] Version: v1.16.0
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: I0109 03:36:38.695590   20283 plugins.go:100] No cloud provider specified.
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: I0109 03:36:38.695601   20283 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: I0109 03:36:38.697193   20283 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: W0109 03:36:38.697832   20283 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: W0109 03:36:38.697892   20283 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:36:38 old-k8s-version-901000 kubelet[20283]: F0109 03:36:38.697923   20283 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:36:38 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:36:38 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:36:39 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Jan 09 03:36:39 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:36:39 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: I0109 03:36:39.445062   20360 server.go:410] Version: v1.16.0
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: I0109 03:36:39.445230   20360 plugins.go:100] No cloud provider specified.
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: I0109 03:36:39.445239   20360 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: I0109 03:36:39.446891   20360 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: W0109 03:36:39.447604   20360 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: W0109 03:36:39.447662   20360 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:36:39 old-k8s-version-901000 kubelet[20360]: F0109 03:36:39.447683   20360 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:36:39 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:36:39 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0108 19:36:39.736797   91770 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
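
Note: the fatal "unable to determine runtime API version" in the container-status section above means the CRI client could not even reach /var/run/dockershim.sock; in Kubernetes v1.16 that socket is created by the kubelet's built-in dockershim, so it disappears whenever the kubelet is down. A minimal probe to tell a missing socket apart from an unresponsive daemon, standard-library Go only (a hypothetical diagnostic sketch, not part of minikube or cri-tools):

	// sockprobe.go: checks whether the CRI socket from the dial error above
	// exists and accepts connections. Hypothetical helper.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/dockershim.sock"

		if _, err := os.Stat(sock); err != nil {
			// Matches this run: the socket file was never (re)created.
			fmt.Fprintf(os.Stderr, "socket missing: %v\n", err)
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket present but not accepting: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("CRI socket reachable")
	}

On this node the probe would take the first branch, matching the "connect: no such file or directory" in the error text.
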
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (386.78297ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (505.82s)
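
The kubelet log above pins down the failure: every restart dies with "failed to run Kubelet: mountpoint for cpu not found" (restart counter 153, 154, ...). A v1.16 kubelet scans the mount table for a cgroup v1 filesystem exposing the "cpu" controller; on a host where only cgroup v2 is mounted (the default on recent Docker Desktop / linuxkit kernels such as the 6.5.11-linuxkit seen here), no such entry exists. A minimal sketch of that kind of /proc/mounts scan, standard-library Go only (hypothetical code, not the kubelet's actual implementation):

	// cpucheck.go: looks for a cgroup v1 mount whose options include "cpu",
	// mimicking the check behind the fatal kubelet error above. Hypothetical.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func cpuCgroupMountpoint() (string, error) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return "", err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Each line: device mountpoint fstype options dump pass
			fields := strings.Fields(sc.Text())
			if len(fields) < 4 || fields[2] != "cgroup" {
				continue // cgroup v2 shows up as fstype "cgroup2" and is skipped
			}
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					return fields[1], nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("mountpoint for cpu not found")
	}

	func main() {
		mp, err := cpuCgroupMountpoint()
		if err != nil {
			fmt.Fprintln(os.Stderr, err) // the failure mode in the kubelet log
			os.Exit(1)
		}
		fmt.Println("cpu cgroup mounted at", mp)
	}

On a cgroup-v2-only host the scan finds no v1 "cgroup" entries at all, so it returns the same "mountpoint for cpu not found" string the kubelet log shows.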

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 19:36:41.504613   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:36:42.287234   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:37:02.783934   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:37:02.950123   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:37:07.646335   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:37:25.721375   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:37:43.748954   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:38:17.009427   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:38:25.997338   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:38:48.763628   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:38:52.617502   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:39:01.327562   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:39:05.667854   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:40:15.329761   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:40:15.721023   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:40:16.721885   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:40:24.154372   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:40:24.364029   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:40:50.540419   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:41:21.815072   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:41:36.946486   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:41:38.401151   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:41:39.768138   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:41:41.521959   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:41:49.505796   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:42:02.946551   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:42:07.642362   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:42:25.715392   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:42:59.990644   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:43:04.574197   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:43:17.002486   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:43:52.609791   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:45:10.691460   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:45:15.321894   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:45:16.713947   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:45:24.147751   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (383.433378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
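
The wait behind this failure simply polls the apiserver for pods matching the selector until its 9m0s context expires; every attempt above ends in EOF because nothing is answering behind the forwarded port. A sketch of that style of label-selector poll, assuming k8s.io/client-go and a placeholder kubeconfig path (hypothetical code, not the actual helpers_test.go implementation):

	// waitpods.go: polls for dashboard pods by label until a deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; substitute the kubeconfig of the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Println("WARNING: pod list returned:", err) // the EOFs above
			} else if len(pods.Items) > 0 {
				fmt.Println("pod appeared:", pods.Items[0].Name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to start within 9m0s: context deadline exceeded")
				return
			case <-time.After(10 * time.Second):
			}
		}
	}
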
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:28:15.901372485Z",
	            "FinishedAt": "2024-01-09T03:28:13.139361168Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7496a8cb9d7ae61048a11417a73137893d8a3461fad23af3b458647a8274e070",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50186"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7496a8cb9d7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "6b095a8900da5b41d0ebf6974e7816d2fdca4786b0382b85e41c193a78415ea4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
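
The inspect output also explains the endpoint in the warnings: the container is Running and 8443/tcp (the apiserver port) is published on 127.0.0.1:50186, exactly the address returning EOF above, so the container is up but nothing inside it is serving. A sketch of pulling that mapping out programmatically, assuming the docker CLI plus standard-library Go (a hypothetical helper, not part of the suite):

	// port8443.go: extracts the host binding for 8443/tcp from docker inspect.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-901000").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			panic("unexpected inspect output")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			// Prints 127.0.0.1:50186 for this run.
			fmt.Printf("apiserver published at https://%s:%s\n", b.HostIp, b.HostPort)
		}
	}
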
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (381.135523ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25: (1.368393686s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-363000                                   | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:26 PST | 08 Jan 24 19:32 PST |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901000                              | old-k8s-version-901000       | jenkins | v1.32.0 | 08 Jan 24 19:28 PST | 08 Jan 24 19:28 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901000             | old-k8s-version-901000       | jenkins | v1.32.0 | 08 Jan 24 19:28 PST | 08 Jan 24 19:28 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901000                              | old-k8s-version-901000       | jenkins | v1.32.0 | 08 Jan 24 19:28 PST |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | no-preload-363000 image list                           | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-363000                                   | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-363000                                   | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-363000                                   | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	| delete  | -p no-preload-363000                                   | no-preload-363000            | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:32 PST |
	| start   | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:32 PST | 08 Jan 24 19:34 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-689000            | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-689000                 | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:34 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:34 PST | 08 Jan 24 19:43 PST |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | embed-certs-689000 image list                          | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:43 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:43 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:43 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:44 PST |
	| delete  | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	| delete  | -p                                                     | disable-driver-mounts-336000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | disable-driver-mounts-336000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-735000  | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-735000       | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST |                     |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 19:44:59
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 19:44:59.550285   92154 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:44:59.550504   92154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:44:59.550510   92154 out.go:309] Setting ErrFile to fd 2...
	I0108 19:44:59.550514   92154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:44:59.550706   92154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:44:59.552145   92154 out.go:303] Setting JSON to false
	I0108 19:44:59.574796   92154 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":38671,"bootTime":1704733228,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:44:59.574916   92154 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:44:59.596601   92154 out.go:177] * [default-k8s-diff-port-735000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:44:59.638926   92154 notify.go:220] Checking for updates...
	I0108 19:44:59.676038   92154 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:44:59.734892   92154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:44:59.794062   92154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:44:59.854285   92154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:44:59.915115   92154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:44:59.936162   92154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:44:59.957474   92154 config.go:182] Loaded profile config "default-k8s-diff-port-735000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:44:59.957957   92154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:45:00.015473   92154 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:45:00.015644   92154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:45:00.116664   92154 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:45:00.107231017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:45:00.136875   92154 out.go:177] * Using the docker driver based on existing profile
	I0108 19:45:00.157674   92154 start.go:298] selected driver: docker
	I0108 19:45:00.157707   92154 start.go:902] validating driver "docker" against &{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:45:00.157851   92154 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:45:00.162097   92154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:45:00.261297   92154 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:45:00.252202866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:45:00.261541   92154 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 19:45:00.261573   92154 cni.go:84] Creating CNI manager for ""
	I0108 19:45:00.261586   92154 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:45:00.261594   92154 start_flags.go:321] config:
	{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:45:00.305730   92154 out.go:177] * Starting control plane node default-k8s-diff-port-735000 in cluster default-k8s-diff-port-735000
	I0108 19:45:00.327008   92154 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:45:00.349104   92154 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:45:00.390725   92154 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 19:45:00.390751   92154 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:45:00.390775   92154 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 19:45:00.390788   92154 cache.go:56] Caching tarball of preloaded images
	I0108 19:45:00.390908   92154 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:45:00.390918   92154 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 19:45:00.391501   92154 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/config.json ...
	I0108 19:45:00.443834   92154 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:45:00.443851   92154 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:45:00.443873   92154 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:45:00.443915   92154 start.go:365] acquiring machines lock for default-k8s-diff-port-735000: {Name:mk4c0d70a37b18d4ade975191640cbec9e7aef84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:45:00.444011   92154 start.go:369] acquired machines lock for "default-k8s-diff-port-735000" in 74.83µs
	I0108 19:45:00.444031   92154 start.go:96] Skipping create...Using existing machine configuration
	I0108 19:45:00.444040   92154 fix.go:54] fixHost starting: 
	I0108 19:45:00.444267   92154 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-735000 --format={{.State.Status}}
	I0108 19:45:00.496104   92154 fix.go:102] recreateIfNeeded on default-k8s-diff-port-735000: state=Stopped err=<nil>
	W0108 19:45:00.496138   92154 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 19:45:00.517601   92154 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-735000" ...
	I0108 19:45:00.560425   92154 cli_runner.go:164] Run: docker start default-k8s-diff-port-735000
	I0108 19:45:00.810786   92154 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-735000 --format={{.State.Status}}
	I0108 19:45:00.864668   92154 kic.go:430] container "default-k8s-diff-port-735000" state is running.
	I0108 19:45:00.865316   92154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-735000
	I0108 19:45:00.920637   92154 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/config.json ...
	I0108 19:45:00.921055   92154 machine.go:88] provisioning docker machine ...
	I0108 19:45:00.921086   92154 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-735000"
	I0108 19:45:00.921150   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:00.980512   92154 main.go:141] libmachine: Using SSH client type: native
	I0108 19:45:00.980898   92154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51002 <nil> <nil>}
	I0108 19:45:00.980911   92154 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-735000 && echo "default-k8s-diff-port-735000" | sudo tee /etc/hostname
	I0108 19:45:00.982338   92154 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 19:45:04.129435   92154 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-735000
	
	I0108 19:45:04.129527   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:04.183930   92154 main.go:141] libmachine: Using SSH client type: native
	I0108 19:45:04.184228   92154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51002 <nil> <nil>}
	I0108 19:45:04.184245   92154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-735000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-735000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-735000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:45:04.319592   92154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
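The SSH command above is an idempotent /etc/hosts edit: it only touches the file when no line already maps the hostname, rewriting an existing 127.0.1.1 entry in place and appending one otherwise. A minimal standalone sketch of the same pattern (NODE_NAME is a placeholder, not a value taken from this run):

    # Map NODE_NAME to 127.0.1.1 exactly once, preserving any other entries.
    NODE_NAME=my-node
    if ! grep -q "[[:space:]]${NODE_NAME}\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${NODE_NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NODE_NAME}" | sudo tee -a /etc/hosts
      fi
    fi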
	I0108 19:45:04.319615   92154 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:45:04.319636   92154 ubuntu.go:177] setting up certificates
	I0108 19:45:04.319644   92154 provision.go:83] configureAuth start
	I0108 19:45:04.319713   92154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-735000
	I0108 19:45:04.371111   92154 provision.go:138] copyHostCerts
	I0108 19:45:04.371241   92154 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:45:04.371249   92154 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:45:04.371387   92154 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:45:04.371633   92154 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:45:04.371647   92154 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:45:04.371729   92154 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:45:04.371906   92154 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:45:04.371912   92154 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:45:04.371987   92154 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:45:04.372134   92154 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-735000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-735000]
	I0108 19:45:04.526347   92154 provision.go:172] copyRemoteCerts
	I0108 19:45:04.526409   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:45:04.526461   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:04.578126   92154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51002 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/default-k8s-diff-port-735000/id_rsa Username:docker}
	I0108 19:45:04.672544   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:45:04.692617   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 19:45:04.712920   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 19:45:04.733071   92154 provision.go:86] duration metric: configureAuth took 413.422464ms
	I0108 19:45:04.733085   92154 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:45:04.733234   92154 config.go:182] Loaded profile config "default-k8s-diff-port-735000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 19:45:04.733341   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:04.785172   92154 main.go:141] libmachine: Using SSH client type: native
	I0108 19:45:04.785487   92154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51002 <nil> <nil>}
	I0108 19:45:04.785496   92154 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:45:04.918385   92154 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:45:04.918401   92154 ubuntu.go:71] root file system type: overlay
	I0108 19:45:04.918490   92154 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:45:04.918581   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:04.971450   92154 main.go:141] libmachine: Using SSH client type: native
	I0108 19:45:04.971753   92154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51002 <nil> <nil>}
	I0108 19:45:04.971804   92154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:45:05.116881   92154 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:45:05.116975   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:05.169274   92154 main.go:141] libmachine: Using SSH client type: native
	I0108 19:45:05.169569   92154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51002 <nil> <nil>}
	I0108 19:45:05.169582   92154 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:45:05.309606   92154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
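The command above is a compare-and-swap update of the docker unit: `diff -u` exits 0 when the freshly rendered docker.service.new matches the installed unit, so nothing happens; on any difference the nonzero exit triggers the move, daemon-reload, enable, and restart. The same pattern in isolation (a sketch; UNIT is a placeholder path):

    # Replace and restart only if the rendered unit actually changed.
    UNIT=/lib/systemd/system/docker.service
    sudo diff -u "$UNIT" "$UNIT.new" || {
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }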
	I0108 19:45:05.309627   92154 machine.go:91] provisioned docker machine in 4.388678195s
	I0108 19:45:05.309635   92154 start.go:300] post-start starting for "default-k8s-diff-port-735000" (driver="docker")
	I0108 19:45:05.309643   92154 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:45:05.309706   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:45:05.309761   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:05.362333   92154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51002 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/default-k8s-diff-port-735000/id_rsa Username:docker}
	I0108 19:45:05.457937   92154 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:45:05.461862   92154 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:45:05.461884   92154 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:45:05.461891   92154 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:45:05.461896   92154 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:45:05.461908   92154 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:45:05.462006   92154 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:45:05.462201   92154 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:45:05.462416   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:45:05.470401   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:45:05.490433   92154 start.go:303] post-start completed in 180.791784ms
	I0108 19:45:05.490547   92154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:45:05.490617   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:05.542245   92154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51002 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/default-k8s-diff-port-735000/id_rsa Username:docker}
	I0108 19:45:05.633333   92154 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:45:05.638373   92154 fix.go:56] fixHost completed within 5.19446541s
	I0108 19:45:05.638391   92154 start.go:83] releasing machines lock for "default-k8s-diff-port-735000", held for 5.194509624s
	I0108 19:45:05.638481   92154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-735000
	I0108 19:45:05.692221   92154 ssh_runner.go:195] Run: cat /version.json
	I0108 19:45:05.692235   92154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:45:05.692307   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:05.692324   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:05.747354   92154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51002 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/default-k8s-diff-port-735000/id_rsa Username:docker}
	I0108 19:45:05.747350   92154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51002 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/default-k8s-diff-port-735000/id_rsa Username:docker}
	I0108 19:45:05.839880   92154 ssh_runner.go:195] Run: systemctl --version
	I0108 19:45:05.948968   92154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 19:45:05.954208   92154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 19:45:05.970905   92154 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0108 19:45:05.970993   92154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 19:45:05.979589   92154 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
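The two find commands above patch the loopback CNI config in place and would park any bridge/podman configs by renaming them so the runtime stops loading them without deleting anything (none were present in this run). A readable sketch of the rename-to-disable half, using the same paths as above:

    # Disable stray bridge/podman CNI configs by renaming, not deleting.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;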
	I0108 19:45:05.979602   92154 start.go:475] detecting cgroup driver to use...
	I0108 19:45:05.979613   92154 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:45:05.979721   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:45:05.994365   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 19:45:06.003935   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:45:06.013094   92154 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:45:06.013155   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:45:06.022530   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:45:06.031801   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:45:06.041085   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:45:06.050933   92154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:45:06.059872   92154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 19:45:06.069389   92154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:45:06.077658   92154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:45:06.085689   92154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:45:06.134734   92154 ssh_runner.go:195] Run: sudo systemctl restart containerd
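The sed sequence above rewrites /etc/containerd/config.toml so containerd's runc shim uses cgroupfs rather than the systemd cgroup driver, matching the "cgroupfs" driver detected on the host; the daemon-reload plus restart then makes the change take effect. Reduced to its core (same file as above):

    # Force containerd onto the cgroupfs cgroup driver and apply it.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd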
	I0108 19:45:06.206386   92154 start.go:475] detecting cgroup driver to use...
	I0108 19:45:06.206408   92154 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:45:06.206472   92154 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:45:06.228307   92154 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:45:06.228386   92154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:45:06.239935   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:45:06.256123   92154 ssh_runner.go:195] Run: which cri-dockerd
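The crictl.yaml rewrite just above points crictl at the cri-dockerd socket, since this profile keeps Docker as the container runtime; the `which cri-dockerd` check then confirms the shim binary is installed. As a standalone sketch (assumes crictl is on PATH):

    # Point crictl at the cri-dockerd socket and verify the endpoint answers.
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version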
	I0108 19:45:06.260812   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:45:06.269411   92154 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:45:06.302121   92154 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:45:06.421584   92154 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:45:06.508740   92154 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:45:06.508838   92154 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:45:06.524973   92154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:45:06.601182   92154 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:45:06.874289   92154 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:45:06.927344   92154 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 19:45:06.979568   92154 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:45:07.034720   92154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:45:07.085039   92154 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 19:45:07.109225   92154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:45:07.166015   92154 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 19:45:07.246761   92154 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 19:45:07.246852   92154 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 19:45:07.251588   92154 start.go:543] Will wait 60s for crictl version
	I0108 19:45:07.251662   92154 ssh_runner.go:195] Run: which crictl
	I0108 19:45:07.255714   92154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 19:45:07.306216   92154 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 19:45:07.306329   92154 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:45:07.330508   92154 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:45:07.398202   92154 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 19:45:07.398362   92154 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-735000 dig +short host.docker.internal
	I0108 19:45:07.516425   92154 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:45:07.516528   92154 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:45:07.521138   92154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:45:07.531738   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:07.583973   92154 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 19:45:07.584053   92154 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:45:07.602643   92154 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 19:45:07.602672   92154 docker.go:601] Images already preloaded, skipping extraction
	I0108 19:45:07.602767   92154 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:45:07.621826   92154 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 19:45:07.621852   92154 cache_images.go:84] Images are preloaded, skipping loading
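The two image listings above are the whole preload check: if every image required for v1.28.4 already shows up in the daemon, extraction of the preload tarball is skipped. A sketch of the same check by hand (the grep pattern is illustrative):

    # List daemon images and confirm the Kubernetes control-plane set is present.
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^registry\.k8s\.io/'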
	I0108 19:45:07.621936   92154 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:45:07.669696   92154 cni.go:84] Creating CNI manager for ""
	I0108 19:45:07.669712   92154 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:45:07.669728   92154 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 19:45:07.669745   92154 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-735000 NodeName:default-k8s-diff-port-735000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 19:45:07.669865   92154 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-735000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 19:45:07.669934   92154 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-735000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 19:45:07.669997   92154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 19:45:07.678621   92154 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:45:07.678683   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:45:07.686798   92154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0108 19:45:07.701783   92154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 19:45:07.717242   92154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
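The scp above writes the rendered kubeadm config shown earlier (2111 bytes) to /var/tmp/minikube/kubeadm.yaml.new on the node. On kubeadm v1.26 or newer the file can be sanity-checked without applying anything; whether the node image ships a kubeadm new enough for this subcommand is an assumption:

    # Validate the rendered kubeadm config (assumes kubeadm >= v1.26 on the node).
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new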
	I0108 19:45:07.732849   92154 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:45:07.737082   92154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
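Unlike the sed-based hostname edit earlier, this hosts update filters out any stale control-plane.minikube.internal line, appends the current one, and copies the result back over /etc/hosts in one step. As a standalone sketch (IP and name as in the run above):

    # Rebuild /etc/hosts with exactly one control-plane entry, then swap it in.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.67.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts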
	I0108 19:45:07.747313   92154 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000 for IP: 192.168.67.2
	I0108 19:45:07.747333   92154 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:45:07.747521   92154 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:45:07.747594   92154 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:45:07.747673   92154 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/client.key
	I0108 19:45:07.747753   92154 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/apiserver.key.c7fa3a9e
	I0108 19:45:07.747826   92154 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/proxy-client.key
	I0108 19:45:07.748035   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:45:07.748079   92154 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:45:07.748088   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:45:07.748117   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:45:07.748145   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:45:07.748172   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:45:07.748256   92154 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:45:07.748795   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:45:07.768914   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 19:45:07.789395   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:45:07.810161   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/default-k8s-diff-port-735000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:45:07.830452   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:45:07.850822   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:45:07.871131   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:45:07.891761   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:45:07.913074   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:45:07.934829   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:45:07.956285   92154 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:45:07.976592   92154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:45:07.992813   92154 ssh_runner.go:195] Run: openssl version
	I0108 19:45:07.998323   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:45:08.007327   92154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:45:08.011443   92154 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:45:08.011498   92154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:45:08.017896   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:45:08.026426   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:45:08.035514   92154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:45:08.039690   92154 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:45:08.039738   92154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:45:08.046333   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:45:08.054839   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:45:08.063805   92154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:45:08.067930   92154 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:45:08.067973   92154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:45:08.074694   92154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
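
The test -s / ln -fs sequence above is how minikube installs extra CAs inside the node: OpenSSL's CApath lookup finds a CA by a subject-hash filename ("<hash>.0"), so a symlink named after the output of "openssl x509 -hash -noout" makes the cert trusted without rebuilding the CA bundle. A minimal Go sketch of the same step, assuming openssl is on PATH and using this run's minikubeCA path purely for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the "openssl x509 -hash" + "ln -fs" step from the log:
// compute the subject hash and expose the cert as /etc/ssl/certs/<hash>.0.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate the force flag of ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	// Path taken from this run; illustrative only.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
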
	I0108 19:45:08.082875   92154 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:45:08.086909   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 19:45:08.093182   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 19:45:08.099382   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 19:45:08.105705   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 19:45:08.111988   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 19:45:08.118126   92154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
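
Each "-checkend 86400" run above asks openssl whether the certificate expires within the next 24 hours (86,400 seconds); a non-zero exit would force certificate regeneration. The equivalent check in Go with the standard library, a sketch using one illustrative path from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// i.e. the Go analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
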
	I0108 19:45:08.124427   92154 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:45:08.124542   92154 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:45:08.142176   92154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:45:08.150752   92154 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 19:45:08.150771   92154 kubeadm.go:636] restartCluster start
	I0108 19:45:08.150823   92154 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 19:45:08.158775   92154 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:08.158854   92154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-735000
	I0108 19:45:08.210766   92154 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-735000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:45:08.210929   92154 kubeconfig.go:146] "default-k8s-diff-port-735000" context is missing from /Users/jenkins/minikube-integration/17866-74927/kubeconfig - will repair!
	I0108 19:45:08.211278   92154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:45:08.212817   92154 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 19:45:08.221470   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:08.221517   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:08.230414   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:08.721995   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:08.722185   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:08.733897   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:09.221904   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:09.222027   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:09.233408   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:09.721665   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:09.721789   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:09.733160   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:10.221482   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:10.221557   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:10.233653   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:10.722239   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:10.722393   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:10.733856   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:11.222221   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:11.222352   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:11.233855   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:11.722414   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:11.722523   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:11.734751   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:12.222030   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:12.222178   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:12.233772   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:12.722295   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:12.722401   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:12.734044   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:13.221558   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:13.221704   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:13.232931   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:13.722072   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:13.722182   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:13.733574   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:14.223464   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:14.223641   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:14.235010   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:14.721422   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:14.721488   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:14.731565   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:15.223109   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:15.223262   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:15.234624   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:15.723472   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:15.723572   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:15.734829   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:16.221405   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:16.221477   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:16.231807   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:16.721473   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:16.721604   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:16.733243   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:17.222094   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:17.222225   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:17.233841   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:17.721439   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:17.721532   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:17.732552   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:18.223364   92154 api_server.go:166] Checking apiserver status ...
	I0108 19:45:18.223469   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:45:18.234429   92154 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:18.234443   92154 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
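
The block above is a fixed-interval poll: roughly every 500ms minikube re-runs pgrep for the apiserver process, and when the overall deadline passes it concludes the cluster needs reconfiguring. A sketch of that loop shape (the timeout value here is illustrative, not minikube's actual setting):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until it exits 0 (a matching process
// exists) or the deadline passes, mirroring the retry loop in the log.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver process never appeared")
}

func main() {
	fmt.Println(waitForAPIServerPID(10 * time.Second))
}
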
	I0108 19:45:18.234449   92154 kubeadm.go:1135] stopping kube-system containers ...
	I0108 19:45:18.234518   92154 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:45:18.252986   92154 docker.go:469] Stopping containers: [496348c501d8 4fe8534522ef cb5eec2fc05b e4c516717328 b364731274da 8f49ebd9cd84 0f0f2ad334dc e185fc533f73 fee6c5dac59d d0fed64b26e1 c558c891d5a2 00a048bce556 cd91eb6a74e9 e1c8d1c8a268 497a646d253d]
	I0108 19:45:18.253068   92154 ssh_runner.go:195] Run: docker stop 496348c501d8 4fe8534522ef cb5eec2fc05b e4c516717328 b364731274da 8f49ebd9cd84 0f0f2ad334dc e185fc533f73 fee6c5dac59d d0fed64b26e1 c558c891d5a2 00a048bce556 cd91eb6a74e9 e1c8d1c8a268 497a646d253d
	I0108 19:45:18.280427   92154 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 19:45:18.291708   92154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:45:18.299898   92154 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  9 03:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  9 03:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  9 03:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  9 03:44 /etc/kubernetes/scheduler.conf
	
	I0108 19:45:18.299984   92154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 19:45:18.308625   92154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 19:45:18.316816   92154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 19:45:18.324906   92154 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:18.324985   92154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 19:45:18.333184   92154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 19:45:18.341352   92154 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:45:18.341409   92154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 19:45:18.349353   92154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:45:18.357752   92154 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 19:45:18.357764   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:45:18.405641   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:45:19.024383   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:45:19.151068   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:45:19.205626   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
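
Rather than running a full kubeadm init, the restart path above replays individual init phases against the existing config, which preserves the etcd data directory. A sketch of that sequence run directly on the node (minikube actually drives it over SSH with a pinned binaries PATH, as the log shows):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Phase order exactly as it appears in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
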
	I0108 19:45:19.253761   92154 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:45:19.253853   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:45:19.753934   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:45:20.254194   92154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:45:20.306200   92154 api_server.go:72] duration metric: took 1.052493259s to wait for apiserver process to appear ...
	I0108 19:45:20.306224   92154 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:45:20.306248   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:20.307399   92154 api_server.go:269] stopped: https://127.0.0.1:51001/healthz: Get "https://127.0.0.1:51001/healthz": EOF
	I0108 19:45:20.806659   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:22.997894   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:45:22.997916   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:45:22.997928   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:23.095308   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:45:23.095333   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:45:23.306329   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:23.313740   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:45:23.313796   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:45:23.806869   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:23.814343   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:45:23.814401   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:45:24.306339   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:24.314771   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:45:24.314796   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:45:24.806803   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:24.813944   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:45:24.813973   92154 api_server.go:103] status: https://127.0.0.1:51001/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:45:25.306319   92154 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51001/healthz ...
	I0108 19:45:25.312892   92154 api_server.go:279] https://127.0.0.1:51001/healthz returned 200:
	ok
	I0108 19:45:25.320857   92154 api_server.go:141] control plane version: v1.28.4
	I0108 19:45:25.320873   92154 api_server.go:131] duration metric: took 5.014775905s to wait for apiserver health ...
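
The healthz wait above hits the host-forwarded port (51001 in this run) over TLS; a probe of this kind commonly skips certificate verification, which is acceptable for a liveness poll but not for real API traffic. A minimal sketch of one probe iteration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification for the liveness probe only; the port
			// number is the host-forwarded port from this run.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:51001/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // matches the "EOF" case in the log
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
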
	I0108 19:45:25.320879   92154 cni.go:84] Creating CNI manager for ""
	I0108 19:45:25.320889   92154 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:45:25.342884   92154 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 19:45:25.363969   92154 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 19:45:25.373278   92154 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
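
The 457-byte conflist scp'd above is minikube's generated bridge CNI config. Its exact contents come from a template inside minikube, but a representative bridge conflist looks like the following (the subnet and names here are illustrative, not necessarily what this run wrote):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
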
	I0108 19:45:25.388640   92154 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:45:25.396008   92154 system_pods.go:59] 8 kube-system pods found
	I0108 19:45:25.396030   92154 system_pods.go:61] "coredns-5dd5756b68-6rwwd" [cc03eb6d-1726-405f-b35b-d16ac6c98d5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 19:45:25.396043   92154 system_pods.go:61] "etcd-default-k8s-diff-port-735000" [584f78a4-46bb-41c5-94c3-8f8ebc5bc67e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 19:45:25.396049   92154 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-735000" [35f62322-4b2b-4eee-ac2d-c5cc6f98af70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:45:25.396059   92154 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-735000" [1de241e3-e600-440d-9e9a-09e6d2e6124a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 19:45:25.396065   92154 system_pods.go:61] "kube-proxy-ccsbq" [22377bd1-e700-4ae3-babb-3c4a76383e52] Running
	I0108 19:45:25.396073   92154 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-735000" [728a81ed-f1ea-4d2f-b90a-e5dd302e25f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:45:25.396078   92154 system_pods.go:61] "metrics-server-57f55c9bc5-28dnp" [a9cd6f52-5e58-424d-9d4e-e84c445a64de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 19:45:25.396083   92154 system_pods.go:61] "storage-provisioner" [ba80054a-d2c7-4713-8bc3-01e12d6c8061] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 19:45:25.396088   92154 system_pods.go:74] duration metric: took 7.436921ms to wait for pod list to return data ...
	I0108 19:45:25.396095   92154 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:45:25.399266   92154 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:45:25.399279   92154 node_conditions.go:123] node cpu capacity is 12
	I0108 19:45:25.399289   92154 node_conditions.go:105] duration metric: took 3.190845ms to run NodePressure ...
	I0108 19:45:25.399301   92154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:45:25.527331   92154 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 19:45:25.531476   92154 kubeadm.go:787] kubelet initialised
	I0108 19:45:25.531488   92154 kubeadm.go:788] duration metric: took 4.14292ms waiting for restarted kubelet to initialise ...
	I0108 19:45:25.531496   92154 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 19:45:25.537036   92154 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace to be "Ready" ...
	I0108 19:45:27.543611   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
	I0108 19:45:29.543929   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
	I0108 19:45:31.545917   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
	I0108 19:45:34.044369   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
	I0108 19:45:36.044401   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
	I0108 19:45:38.543809   92154 pod_ready.go:102] pod "coredns-5dd5756b68-6rwwd" in "kube-system" namespace has status "Ready":"False"
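
The pod_ready loop above keeps polling until each system-critical pod reports the Ready condition. A minimal client-go (v0.18+) sketch of the same check; the kubeconfig path and pod name are taken from this run and used illustratively:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has the Ready condition
// set to True, the same predicate the pod_ready wait applies.
func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17866-74927/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(client, "kube-system", "coredns-5dd5756b68-6rwwd")
	fmt.Println(ready, err)
}
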
	
	
	==> Docker <==
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.748451198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.784302399Z" level=info msg="Loading containers: done."
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792131546Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792197106Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:21 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822909958Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822949228Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864005532Z" level=info msg="Processing signal 'terminated'"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864821872Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865266652Z" level=info msg="Daemon shutdown complete"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865552359Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: docker.service: Deactivated successfully.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.917820203Z" level=info msg="Starting up"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.928976131Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.078619687Z" level=info msg="Loading containers: start."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.160686378Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.195694564Z" level=info msg="Loading containers: done."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203622773Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203680796Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.230851290Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.231055167Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:29 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-01-09T03:45:42Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:45:43 up  3:04,  0 users,  load average: 0.28, 0.68, 0.92
	Linux old-k8s-version-901000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 862.
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: I0109 03:45:41.974710   33112 server.go:410] Version: v1.16.0
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: I0109 03:45:41.974961   33112 plugins.go:100] No cloud provider specified.
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: I0109 03:45:41.974972   33112 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: I0109 03:45:41.976792   33112 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: W0109 03:45:41.977854   33112 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: W0109 03:45:41.978027   33112 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:45:41 old-k8s-version-901000 kubelet[33112]: F0109 03:45:41.978060   33112 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:45:41 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:45:42 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 863.
	Jan 09 03:45:42 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:45:42 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: I0109 03:45:42.739443   33202 server.go:410] Version: v1.16.0
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: I0109 03:45:42.739681   33202 plugins.go:100] No cloud provider specified.
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: I0109 03:45:42.739692   33202 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: I0109 03:45:42.750373   33202 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: W0109 03:45:42.751023   33202 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: W0109 03:45:42.751083   33202 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:45:42 old-k8s-version-901000 kubelet[33202]: F0109 03:45:42.751104   33202 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:45:42 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:45:42 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0108 19:45:42.870527   92246 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (387.155123ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (402.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 19:45:50.532325   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:46:21.807768   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:46:36.940502   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:46:41.513003   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:46:47.199702   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:47:02.938645   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:47:07.634531   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:47:25.707677   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:48:16.993465   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:48:52.601547   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:49:01.310098   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:50:15.312068   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:50:16.705790   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:50:24.138741   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:51:21.799667   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:51:36.930663   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:52:02.930694   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 19:52:07.626983   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:52:13.579074   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50186/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (404.070691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-901000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-901000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (6.966µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-901000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
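The 9m0s wait that expires above is a label-selector poll: each attempt lists pods in the "kubernetes-dashboard" namespace through the apiserver at 127.0.0.1:50186, logs the EOF as a WARNING, and retries until the context deadline. A rough client-go sketch of such a loop, assuming the kubeconfig path from this run (illustrative only, not minikube's actual helper):

```go
// Illustrative sketch: poll for pods by label under a deadline, roughly the
// pattern behind the "waiting 9m0s for pods matching ..." lines above.
// The kubeconfig path, namespace, and label come from this report.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/Users/jenkins/minikube-integration/17866-74927/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil && len(pods.Items) > 0 {
			fmt.Println("dashboard pod found:", pods.Items[0].Name)
			return
		}
		if err != nil {
			// Corresponds to the WARNING lines above: each failed list is logged, then retried.
			fmt.Println("WARNING: pod list returned:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("pod failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(3 * time.Second):
		}
	}
}
```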
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-901000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c",
	        "Created": "2024-01-09T03:22:27.685275696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T03:28:15.901372485Z",
	            "FinishedAt": "2024-01-09T03:28:13.139361168Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/hosts",
	        "LogPath": "/var/lib/docker/containers/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c/aa25a1062c3637249ed98695e91a47377ce4dea8ff0830e7d303498491e8835c-json.log",
	        "Name": "/old-k8s-version-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a-init/diff:/var/lib/docker/overlay2/60277c56cb2e84cbe47fd8ed3c79b85a017889e24b19778a8fc4b14c01478988/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bcc6ca847d5924ffe005728f7ed77c8094a92053674c964ad837af718051a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-901000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7496a8cb9d7ae61048a11417a73137893d8a3461fad23af3b458647a8274e070",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50186"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7496a8cb9d7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa25a1062c36",
	                        "old-k8s-version-901000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "876d1b6b5bcffa0a183a1c34f9924af9d72a7d63d67d2b9f07e88b4f08db4216",
	                    "EndpointID": "6b095a8900da5b41d0ebf6974e7816d2fdca4786b0382b85e41c193a78415ea4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
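The inspect output ties the failures together: the container's State.Status is "running", and 8443/tcp, the apiserver port, is published at 127.0.0.1:50186, the same endpoint the pod-list calls were getting EOF from. Docker is up while the apiserver inside the container is not serving. A small Go sketch of extracting that mapping from `docker inspect` JSON (illustrative; the field names follow the output above):

```go
// Illustrative sketch: pull the published host port for 8443/tcp out of
// `docker inspect` output, which is how the apiserver endpoint
// 127.0.0.1:50186 in the errors above maps back to this container.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect models only the fields of `docker inspect` output used here.
type inspect struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-901000").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			// For the report above this prints 127.0.0.1:50186.
			fmt.Printf("%s apiserver published at %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}
```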
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (382.751486ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25
E0108 19:52:25.698991   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-901000 logs -n 25: (1.365979851s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:43 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:43 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:43 PST | 08 Jan 24 19:44 PST |
	| delete  | -p embed-certs-689000                                  | embed-certs-689000           | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	| delete  | -p                                                     | disable-driver-mounts-336000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | disable-driver-mounts-336000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-735000  | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-735000       | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:44 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:44 PST | 08 Jan 24 19:50 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-735000                           | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:50 PST | 08 Jan 24 19:50 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:50 PST | 08 Jan 24 19:50 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:50 PST | 08 Jan 24 19:50 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-735000 | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | default-k8s-diff-port-735000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-103000 --memory=2200 --alsologtostderr   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-103000             | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-103000                                   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-103000                  | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:51 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-103000 --memory=2200 --alsologtostderr   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:51 PST | 08 Jan 24 19:52 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-103000 image list                           | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:52 PST | 08 Jan 24 19:52 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-103000                                   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:52 PST | 08 Jan 24 19:52 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-103000                                   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:52 PST | 08 Jan 24 19:52 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-103000                                   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:52 PST | 08 Jan 24 19:52 PST |
	| delete  | -p newest-cni-103000                                   | newest-cni-103000            | jenkins | v1.32.0 | 08 Jan 24 19:52 PST | 08 Jan 24 19:52 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 19:51:49
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 19:51:49.724249   92604 out.go:296] Setting OutFile to fd 1 ...
	I0108 19:51:49.724458   92604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:51:49.724464   92604 out.go:309] Setting ErrFile to fd 2...
	I0108 19:51:49.724468   92604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 19:51:49.724671   92604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 19:51:49.726173   92604 out.go:303] Setting JSON to false
	I0108 19:51:49.749161   92604 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":39081,"bootTime":1704733228,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 19:51:49.749265   92604 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 19:51:49.770416   92604 out.go:177] * [newest-cni-103000] minikube v1.32.0 on Darwin 14.2.1
	I0108 19:51:49.812051   92604 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 19:51:49.812127   92604 notify.go:220] Checking for updates...
	I0108 19:51:49.855153   92604 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:51:49.875845   92604 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 19:51:49.897016   92604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 19:51:49.918242   92604 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 19:51:49.938980   92604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 19:51:49.960994   92604 config.go:182] Loaded profile config "newest-cni-103000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0108 19:51:49.961779   92604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 19:51:50.018760   92604 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 19:51:50.018925   92604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:51:50.123466   92604 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:51:50.113033731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:51:50.165223   92604 out.go:177] * Using the docker driver based on existing profile
	I0108 19:51:50.186671   92604 start.go:298] selected driver: docker
	I0108 19:51:50.186700   92604 start.go:902] validating driver "docker" against &{Name:newest-cni-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-103000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:51:50.186824   92604 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 19:51:50.191268   92604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 19:51:50.294279   92604 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 03:51:50.284534458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 19:51:50.294524   92604 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 19:51:50.294585   92604 cni.go:84] Creating CNI manager for ""
	I0108 19:51:50.294597   92604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:51:50.294609   92604 start_flags.go:321] config:
	{Name:newest-cni-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-103000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:51:50.337522   92604 out.go:177] * Starting control plane node newest-cni-103000 in cluster newest-cni-103000
	I0108 19:51:50.358477   92604 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 19:51:50.379535   92604 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0108 19:51:50.400242   92604 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 19:51:50.400265   92604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 19:51:50.400303   92604 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 19:51:50.400316   92604 cache.go:56] Caching tarball of preloaded images
	I0108 19:51:50.400417   92604 preload.go:174] Found /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 19:51:50.400428   92604 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0108 19:51:50.400864   92604 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/config.json ...
	I0108 19:51:50.457546   92604 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0108 19:51:50.457559   92604 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0108 19:51:50.457583   92604 cache.go:194] Successfully downloaded all kic artifacts
	I0108 19:51:50.457622   92604 start.go:365] acquiring machines lock for newest-cni-103000: {Name:mkc821043caeccfb3e096356047a7c326cf830ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 19:51:50.457701   92604 start.go:369] acquired machines lock for "newest-cni-103000" in 58.174µs
	I0108 19:51:50.457723   92604 start.go:96] Skipping create...Using existing machine configuration
	I0108 19:51:50.457731   92604 fix.go:54] fixHost starting: 
	I0108 19:51:50.457971   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:51:50.510940   92604 fix.go:102] recreateIfNeeded on newest-cni-103000: state=Stopped err=<nil>
	W0108 19:51:50.510979   92604 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 19:51:50.532444   92604 out.go:177] * Restarting existing docker container for "newest-cni-103000" ...
	I0108 19:51:50.553190   92604 cli_runner.go:164] Run: docker start newest-cni-103000
	I0108 19:51:50.800596   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:51:50.855027   92604 kic.go:430] container "newest-cni-103000" state is running.
	I0108 19:51:50.855632   92604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-103000
	I0108 19:51:50.911065   92604 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/config.json ...
	I0108 19:51:50.911483   92604 machine.go:88] provisioning docker machine ...
	I0108 19:51:50.911508   92604 ubuntu.go:169] provisioning hostname "newest-cni-103000"
	I0108 19:51:50.911609   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:50.970972   92604 main.go:141] libmachine: Using SSH client type: native
	I0108 19:51:50.971550   92604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51580 <nil> <nil>}
	I0108 19:51:50.971573   92604 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-103000 && echo "newest-cni-103000" | sudo tee /etc/hostname
	I0108 19:51:50.973113   92604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 19:51:54.119804   92604 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-103000
	
	I0108 19:51:54.119900   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:54.171049   92604 main.go:141] libmachine: Using SSH client type: native
	I0108 19:51:54.171344   92604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51580 <nil> <nil>}
	I0108 19:51:54.171358   92604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-103000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-103000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-103000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 19:51:54.305321   92604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
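	(The hostname script above is idempotent: if any /etc/hosts entry already ends with the hostname it does nothing; otherwise it rewrites an existing 127.0.1.1 line or appends a new one. A minimal Go sketch of the same patch logic; ensureHostname is a hypothetical helper, not the provisioner's actual code:)

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname patches an /etc/hosts-style file so that `name`
    // resolves via 127.0.1.1, mirroring the shell logic in the log above.
    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Already present? (any line ending in whitespace + name)
        if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
            return nil
        }
        line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if line127.Match(data) {
            out = line127.ReplaceAllString(string(data), "127.0.1.1 "+name)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + name + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostname("/etc/hosts", "newest-cni-103000"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }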
	I0108 19:51:54.305347   92604 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
	I0108 19:51:54.305368   92604 ubuntu.go:177] setting up certificates
	I0108 19:51:54.305381   92604 provision.go:83] configureAuth start
	I0108 19:51:54.305462   92604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-103000
	I0108 19:51:54.356511   92604 provision.go:138] copyHostCerts
	I0108 19:51:54.356608   92604 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
	I0108 19:51:54.356619   92604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
	I0108 19:51:54.356756   92604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
	I0108 19:51:54.357032   92604 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
	I0108 19:51:54.357040   92604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
	I0108 19:51:54.357111   92604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
	I0108 19:51:54.357286   92604 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
	I0108 19:51:54.357297   92604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
	I0108 19:51:54.357360   92604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
	I0108 19:51:54.357535   92604 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.newest-cni-103000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-103000]
	I0108 19:51:54.467312   92604 provision.go:172] copyRemoteCerts
	I0108 19:51:54.467377   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 19:51:54.467436   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:54.522614   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:51:54.617291   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 19:51:54.638853   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 19:51:54.659379   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 19:51:54.679924   92604 provision.go:86] duration metric: configureAuth took 374.537091ms
	I0108 19:51:54.679944   92604 ubuntu.go:193] setting minikube options for container-runtime
	I0108 19:51:54.680094   92604 config.go:182] Loaded profile config "newest-cni-103000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0108 19:51:54.680160   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:54.752219   92604 main.go:141] libmachine: Using SSH client type: native
	I0108 19:51:54.752501   92604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51580 <nil> <nil>}
	I0108 19:51:54.752510   92604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 19:51:54.886918   92604 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 19:51:54.886933   92604 ubuntu.go:71] root file system type: overlay
	I0108 19:51:54.887018   92604 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 19:51:54.887105   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:54.938297   92604 main.go:141] libmachine: Using SSH client type: native
	I0108 19:51:54.938587   92604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51580 <nil> <nil>}
	I0108 19:51:54.938639   92604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 19:51:55.083570   92604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 19:51:55.083666   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:55.135696   92604 main.go:141] libmachine: Using SSH client type: native
	I0108 19:51:55.136008   92604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil>  [] 0s} 127.0.0.1 51580 <nil> <nil>}
	I0108 19:51:55.136024   92604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 19:51:55.274493   92604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
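	(The diff in the command above acts as a change guard: docker.service.new only replaces the installed unit, and docker is only reloaded, enabled, and restarted, when the rendered file actually differs, which keeps unchanged restarts fast. A sketch of the same compare-then-swap pattern in Go; swapIfChanged is a hypothetical helper and uses bytes.Equal instead of shelling out to diff:)

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // swapIfChanged installs newPath over curPath and restarts the unit,
    // but only when the file contents actually differ.
    func swapIfChanged(curPath, newPath, unit string) error {
        cur, _ := os.ReadFile(curPath) // a missing current file counts as "changed"
        next, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(cur, next) {
            return os.Remove(newPath) // nothing to do
        }
        if err := os.Rename(newPath, curPath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", unit},
            {"systemctl", "restart", unit},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }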
	I0108 19:51:55.274511   92604 machine.go:91] provisioned docker machine in 4.363135458s
	I0108 19:51:55.274518   92604 start.go:300] post-start starting for "newest-cni-103000" (driver="docker")
	I0108 19:51:55.274537   92604 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 19:51:55.274611   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 19:51:55.274676   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:55.326588   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:51:55.422523   92604 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 19:51:55.426773   92604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 19:51:55.426798   92604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 19:51:55.426807   92604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 19:51:55.426812   92604 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 19:51:55.426823   92604 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
	I0108 19:51:55.426912   92604 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
	I0108 19:51:55.427119   92604 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
	I0108 19:51:55.427400   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 19:51:55.436395   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:51:55.457375   92604 start.go:303] post-start completed in 182.845283ms
	I0108 19:51:55.457478   92604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 19:51:55.457547   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:55.510162   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:51:55.603104   92604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 19:51:55.607835   92604 fix.go:56] fixHost completed within 5.15024003s
	I0108 19:51:55.607849   92604 start.go:83] releasing machines lock for "newest-cni-103000", held for 5.150276563s
	I0108 19:51:55.607930   92604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-103000
	I0108 19:51:55.658784   92604 ssh_runner.go:195] Run: cat /version.json
	I0108 19:51:55.658797   92604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 19:51:55.658866   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:55.658871   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:55.712416   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:51:55.712418   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:51:55.915215   92604 ssh_runner.go:195] Run: systemctl --version
	I0108 19:51:55.920263   92604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 19:51:55.925057   92604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0108 19:51:55.941333   92604 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
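	(The find/sed pipeline above patches the loopback CNI config in place: it inserts a "name": "loopback" field when one is missing, since newer CNI versions require configs to be named, and pins cniVersion to 1.0.0. A rough Go equivalent operating on the JSON directly; the file name is illustrative, and the log shows minikube doing this with sed rather than a parser:)

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // patchLoopback mirrors the sed pipeline above: ensure the loopback
    // CNI config has a "name" field and pin cniVersion to 1.0.0.
    func patchLoopback(path string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var conf map[string]any
        if err := json.Unmarshal(raw, &conf); err != nil {
            return err
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback" // newer CNI versions reject nameless configs
        }
        conf["cniVersion"] = "1.0.0"
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }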
	I0108 19:51:55.941434   92604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 19:51:55.949628   92604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 19:51:55.949649   92604 start.go:475] detecting cgroup driver to use...
	I0108 19:51:55.949673   92604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:51:55.949783   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:51:55.964307   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 19:51:55.973529   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 19:51:55.982700   92604 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 19:51:55.982764   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 19:51:55.992114   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:51:56.001361   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 19:51:56.010464   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 19:51:56.019774   92604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 19:51:56.028793   92604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 19:51:56.038246   92604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 19:51:56.046320   92604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 19:51:56.054258   92604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:51:56.107427   92604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 19:51:56.183599   92604 start.go:475] detecting cgroup driver to use...
	I0108 19:51:56.183621   92604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 19:51:56.183711   92604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 19:51:56.203883   92604 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0108 19:51:56.203959   92604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 19:51:56.216470   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 19:51:56.233332   92604 ssh_runner.go:195] Run: which cri-dockerd
	I0108 19:51:56.238141   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 19:51:56.248375   92604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 19:51:56.303656   92604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 19:51:56.429029   92604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 19:51:56.507336   92604 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 19:51:56.507439   92604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 19:51:56.523439   92604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:51:56.574843   92604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 19:51:56.873411   92604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:51:56.931345   92604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 19:51:57.002571   92604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 19:51:57.054913   92604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:51:57.103504   92604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 19:51:57.126537   92604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 19:51:57.179929   92604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 19:51:57.261152   92604 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 19:51:57.261254   92604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 19:51:57.266172   92604 start.go:543] Will wait 60s for crictl version
	I0108 19:51:57.266236   92604 ssh_runner.go:195] Run: which crictl
	I0108 19:51:57.270211   92604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 19:51:57.316916   92604 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 19:51:57.316993   92604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:51:57.341196   92604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 19:51:57.388606   92604 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0108 19:51:57.388760   92604 cli_runner.go:164] Run: docker exec -t newest-cni-103000 dig +short host.docker.internal
	I0108 19:51:57.504485   92604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0108 19:51:57.504593   92604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0108 19:51:57.509102   92604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:51:57.519319   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:57.591160   92604 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0108 19:51:57.612775   92604 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 19:51:57.612949   92604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:51:57.634052   92604 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 19:51:57.634080   92604 docker.go:601] Images already preloaded, skipping extraction
	I0108 19:51:57.634179   92604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 19:51:57.654620   92604 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 19:51:57.654641   92604 cache_images.go:84] Images are preloaded, skipping loading
	I0108 19:51:57.654729   92604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 19:51:57.705672   92604 cni.go:84] Creating CNI manager for ""
	I0108 19:51:57.705689   92604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:51:57.705711   92604 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0108 19:51:57.705727   92604 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-103000 NodeName:newest-cni-103000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 19:51:57.705857   92604 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-103000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
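	(The kubeadm config above is a single YAML stream of four documents separated by ---: InitConfiguration for the node and API endpoint, ClusterConfiguration for component extraArgs and cert SANs, KubeletConfiguration, and KubeProxyConfiguration. The values come from the kubeadm options struct logged just before it. A toy Go text/template sketch of that render step for the first document; the opts struct and its field names are illustrative, not minikube's actual types:)

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
        NodeIP           string
    }

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        // Values taken from the kubeadm options logged above.
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.67.2",
            APIServerPort:    8443,
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
            NodeName:         "newest-cni-103000",
            NodeIP:           "192.168.67.2",
        })
    }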
	
	I0108 19:51:57.705930   92604 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-103000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-103000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 19:51:57.705986   92604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 19:51:57.715421   92604 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 19:51:57.715523   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 19:51:57.725400   92604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0108 19:51:57.740963   92604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 19:51:57.755979   92604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0108 19:51:57.771320   92604 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 19:51:57.775274   92604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 19:51:57.785521   92604 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000 for IP: 192.168.67.2
	I0108 19:51:57.785544   92604 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:51:57.785694   92604 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
	I0108 19:51:57.785751   92604 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
	I0108 19:51:57.785845   92604 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/client.key
	I0108 19:51:57.785910   92604 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/apiserver.key.c7fa3a9e
	I0108 19:51:57.785963   92604 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/proxy-client.key
	I0108 19:51:57.786138   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
	W0108 19:51:57.786171   92604 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
	I0108 19:51:57.786180   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 19:51:57.786214   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
	I0108 19:51:57.786250   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
	I0108 19:51:57.786278   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
	I0108 19:51:57.786342   92604 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
	I0108 19:51:57.786927   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 19:51:57.807497   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 19:51:57.827882   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 19:51:57.848429   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/newest-cni-103000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 19:51:57.868811   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 19:51:57.889119   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 19:51:57.909497   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 19:51:57.929909   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 19:51:57.949982   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 19:51:57.970462   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
	I0108 19:51:57.991084   92604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
	I0108 19:51:58.011528   92604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 19:51:58.027114   92604 ssh_runner.go:195] Run: openssl version
	I0108 19:51:58.032597   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 19:51:58.041582   92604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:51:58.045523   92604 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 02:33 /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:51:58.045571   92604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 19:51:58.051894   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 19:51:58.060070   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
	I0108 19:51:58.068941   92604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
	I0108 19:51:58.073035   92604 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 02:38 /usr/share/ca-certificates/75369.pem
	I0108 19:51:58.073083   92604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
	I0108 19:51:58.079898   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
	I0108 19:51:58.088149   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
	I0108 19:51:58.097035   92604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
	I0108 19:51:58.101149   92604 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 02:38 /usr/share/ca-certificates/753692.pem
	I0108 19:51:58.101195   92604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
	I0108 19:51:58.107713   92604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 19:51:58.115708   92604 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 19:51:58.119955   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 19:51:58.126341   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 19:51:58.132610   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 19:51:58.138750   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 19:51:58.145103   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 19:51:58.151230   92604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
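	(Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours, which is what lets the restart path decide the existing control-plane certs can be reused. The same check in pure Go with crypto/x509; a sketch only, since the log shows minikube shelling out to openssl:)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path
    // expires within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, soon, err)
        }
    }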
	I0108 19:51:58.157448   92604 kubeadm.go:404] StartCluster: {Name:newest-cni-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-103000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 19:51:58.157560   92604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:51:58.175779   92604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 19:51:58.184205   92604 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 19:51:58.184227   92604 kubeadm.go:636] restartCluster start
	I0108 19:51:58.184281   92604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 19:51:58.192087   92604 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:51:58.192175   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:51:58.243400   92604 kubeconfig.go:135] verify returned: extract IP: "newest-cni-103000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:51:58.243552   92604 kubeconfig.go:146] "newest-cni-103000" context is missing from /Users/jenkins/minikube-integration/17866-74927/kubeconfig - will repair!
	I0108 19:51:58.243870   92604 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:51:58.245255   92604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 19:51:58.253698   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:51:58.253768   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:51:58.262892   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:51:58.754520   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:51:58.754628   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:51:58.766013   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:51:59.254113   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:51:59.254304   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:51:59.265750   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:51:59.754555   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:51:59.754684   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:51:59.765801   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:00.255729   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:00.255890   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:00.267187   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:00.754507   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:00.754636   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:00.766188   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:01.255778   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:01.255944   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:01.267317   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:01.754334   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:01.754438   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:01.765852   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:02.254273   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:02.254440   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:02.265791   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:02.754292   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:02.754402   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:02.766064   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:03.255718   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:03.255900   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:03.267472   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:03.754685   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:03.754811   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:03.765994   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:04.254873   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:04.254978   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:04.266125   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:04.755646   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:04.755783   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:04.767330   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:05.255238   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:05.255339   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:05.266840   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:05.754498   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:05.754653   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:05.765793   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:06.254831   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:06.254981   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:06.266529   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:06.754695   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:06.754822   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:06.765729   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:07.254375   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:07.254496   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:07.265771   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:07.755092   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:07.755204   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:07.766546   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:08.253976   92604 api_server.go:166] Checking apiserver status ...
	I0108 19:52:08.254119   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 19:52:08.265805   92604 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:08.265820   92604 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 19:52:08.265829   92604 kubeadm.go:1135] stopping kube-system containers ...
	I0108 19:52:08.265904   92604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 19:52:08.285036   92604 docker.go:469] Stopping containers: [2815e3a4450e 1633e7e69ccd 98aaa22f15f1 c1795d5ced6a e5d68632d0cb fca5044dbd13 fb37be2f3322 720131dae599 b5b2bc020c8f 280f80d692d6 2c52d267624b 38ab1a0dd5cf bcd3bb7e4410 1f2f2c45a0db c95e2046ec17]
	I0108 19:52:08.285145   92604 ssh_runner.go:195] Run: docker stop 2815e3a4450e 1633e7e69ccd 98aaa22f15f1 c1795d5ced6a e5d68632d0cb fca5044dbd13 fb37be2f3322 720131dae599 b5b2bc020c8f 280f80d692d6 2c52d267624b 38ab1a0dd5cf bcd3bb7e4410 1f2f2c45a0db c95e2046ec17
	I0108 19:52:08.306118   92604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 19:52:08.317252   92604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 19:52:08.325462   92604 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Jan  9 03:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  9 03:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  9 03:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  9 03:51 /etc/kubernetes/scheduler.conf
	
	I0108 19:52:08.325524   92604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 19:52:08.333963   92604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 19:52:08.342366   92604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 19:52:08.350242   92604 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:08.350302   92604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 19:52:08.358303   92604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 19:52:08.366496   92604 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 19:52:08.366556   92604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
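The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that lacks it is removed so the kubeadm init phases that follow can regenerate it. A hedged Go sketch of the same logic (the structure is illustrative, not minikube's kubeadm.go):

```go
// Sketch, assuming the endpoint and file list shown in the log above.
// grep exits non-zero when the pattern is absent; that is treated as
// "stale config", and the file is deleted for regeneration.
package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Printf("failed to remove %s: %v\n", f, err)
			}
		}
	}
}
```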
	I0108 19:52:08.374546   92604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 19:52:08.382775   92604 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 19:52:08.382789   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:08.429441   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:08.922231   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:09.052539   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:09.098946   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:09.144701   92604 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:52:09.144776   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:52:09.644835   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:52:10.146936   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:52:10.214262   92604 api_server.go:72] duration metric: took 1.069584239s to wait for apiserver process to appear ...
	I0108 19:52:10.214280   92604 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:52:10.214303   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:10.215891   92604 api_server.go:269] stopped: https://127.0.0.1:51579/healthz: Get "https://127.0.0.1:51579/healthz": EOF
	I0108 19:52:10.715145   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:12.641908   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 19:52:12.641944   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 19:52:12.641981   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:12.716543   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:52:12.716576   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:52:12.716593   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:12.722907   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:52:12.722923   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:52:13.214993   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:13.221194   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:52:13.221222   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:52:13.714428   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:13.721419   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:52:13.721448   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:52:14.214382   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:14.221483   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 19:52:14.221505   92604 api_server.go:103] status: https://127.0.0.1:51579/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 19:52:14.714299   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:14.720603   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 200:
	ok
	I0108 19:52:14.727568   92604 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 19:52:14.746889   92604 api_server.go:131] duration metric: took 4.532716807s to wait for apiserver health ...
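The preceding 4.5s of EOF, 403, and 500 responses is ordinary apiserver warm-up, and the log shows the poll loop that rides it out: GET /healthz roughly every 500ms until it returns 200 "ok". A self-contained Go sketch of that retry pattern (the port and the InsecureSkipVerify choice are assumptions for a local apiserver with a self-signed certificate, not minikube's exact code):

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200,
// tolerating connection errors and non-200 statuses along the way.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: local apiserver serves a self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:51579/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```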
	I0108 19:52:14.746903   92604 cni.go:84] Creating CNI manager for ""
	I0108 19:52:14.746925   92604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 19:52:14.769950   92604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 19:52:14.791121   92604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 19:52:14.801730   92604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
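The scp above writes the generated bridge conflist into /etc/cni/net.d. The 457-byte payload itself is not shown in the log, so the JSON below is only a plausible minimal bridge + host-local IPAM conflist, not the literal file minikube wrote:

```go
// Sketch: write a minimal bridge CNI conflist to the path shown in the
// log. The JSON contents are an assumption, not the actual payload.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```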
	I0108 19:52:14.817260   92604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:52:14.825207   92604 system_pods.go:59] 8 kube-system pods found
	I0108 19:52:14.825222   92604 system_pods.go:61] "coredns-76f75df574-d4ls2" [74992a9f-2138-4b4d-93b4-4d6516780733] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 19:52:14.825229   92604 system_pods.go:61] "etcd-newest-cni-103000" [6118beef-e8b6-4f3d-a86b-37e8ee3b3b3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 19:52:14.825235   92604 system_pods.go:61] "kube-apiserver-newest-cni-103000" [0dbd57fc-10a5-4ffb-9dfb-822de2258a63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:52:14.825240   92604 system_pods.go:61] "kube-controller-manager-newest-cni-103000" [3389a07f-f60a-417d-b3b6-1676727259b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 19:52:14.825244   92604 system_pods.go:61] "kube-proxy-t8v4p" [45421e82-be11-4d8d-99c9-3904c8f05b61] Running
	I0108 19:52:14.825249   92604 system_pods.go:61] "kube-scheduler-newest-cni-103000" [648b0dbb-ad18-44c9-ab8e-5d3b44d14c22] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:52:14.825260   92604 system_pods.go:61] "metrics-server-57f55c9bc5-8bvfm" [f6d6ff58-6bfb-472b-9979-c143a465b221] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 19:52:14.825264   92604 system_pods.go:61] "storage-provisioner" [7a7cb1e0-e26c-4056-a6c6-e8d367a5f7ff] Running
	I0108 19:52:14.825269   92604 system_pods.go:74] duration metric: took 7.997053ms to wait for pod list to return data ...
	I0108 19:52:14.825276   92604 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:52:14.828246   92604 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:52:14.828260   92604 node_conditions.go:123] node cpu capacity is 12
	I0108 19:52:14.828271   92604 node_conditions.go:105] duration metric: took 2.989853ms to run NodePressure ...
	I0108 19:52:14.828288   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 19:52:15.078395   92604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 19:52:15.086222   92604 ops.go:34] apiserver oom_adj: -16
	I0108 19:52:15.086238   92604 kubeadm.go:640] restartCluster took 16.902447441s
	I0108 19:52:15.086256   92604 kubeadm.go:406] StartCluster complete in 16.929259979s
	I0108 19:52:15.086269   92604 settings.go:142] acquiring lock: {Name:mk7fdf0cdaaa885ecc8ed27d1c431ecf7550f639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:52:15.086346   92604 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 19:52:15.086966   92604 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/kubeconfig: {Name:mka56893876a255b4247f6735103824515326092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 19:52:15.087246   92604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 19:52:15.087269   92604 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 19:52:15.087318   92604 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-103000"
	I0108 19:52:15.087337   92604 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-103000"
	W0108 19:52:15.087344   92604 addons.go:246] addon storage-provisioner should already be in state true
	I0108 19:52:15.087347   92604 addons.go:69] Setting metrics-server=true in profile "newest-cni-103000"
	I0108 19:52:15.087349   92604 addons.go:69] Setting dashboard=true in profile "newest-cni-103000"
	I0108 19:52:15.087377   92604 addons.go:237] Setting addon metrics-server=true in "newest-cni-103000"
	I0108 19:52:15.087377   92604 config.go:182] Loaded profile config "newest-cni-103000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	W0108 19:52:15.087385   92604 addons.go:246] addon metrics-server should already be in state true
	I0108 19:52:15.087386   92604 addons.go:237] Setting addon dashboard=true in "newest-cni-103000"
	I0108 19:52:15.087337   92604 addons.go:69] Setting default-storageclass=true in profile "newest-cni-103000"
	W0108 19:52:15.087398   92604 addons.go:246] addon dashboard should already be in state true
	I0108 19:52:15.087399   92604 host.go:66] Checking if "newest-cni-103000" exists ...
	I0108 19:52:15.087415   92604 host.go:66] Checking if "newest-cni-103000" exists ...
	I0108 19:52:15.087423   92604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-103000"
	I0108 19:52:15.087460   92604 host.go:66] Checking if "newest-cni-103000" exists ...
	I0108 19:52:15.087761   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:52:15.087779   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:52:15.087793   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:52:15.087810   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:52:15.095578   92604 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-103000" context rescaled to 1 replicas
	I0108 19:52:15.095702   92604 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 19:52:15.117757   92604 out.go:177] * Verifying Kubernetes components...
	I0108 19:52:15.159521   92604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 19:52:15.165734   92604 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 19:52:15.193143   92604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 19:52:15.213281   92604 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 19:52:15.234203   92604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 19:52:15.170574   92604 addons.go:237] Setting addon default-storageclass=true in "newest-cni-103000"
	I0108 19:52:15.174426   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-103000
	W0108 19:52:15.255549   92604 addons.go:246] addon default-storageclass should already be in state true
	I0108 19:52:15.255700   92604 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:52:15.278397   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 19:52:15.278429   92604 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 19:52:15.278440   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 19:52:15.278471   92604 host.go:66] Checking if "newest-cni-103000" exists ...
	I0108 19:52:15.301208   92604 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0108 19:52:15.278516   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:52:15.278516   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:52:15.278851   92604 cli_runner.go:164] Run: docker container inspect newest-cni-103000 --format={{.State.Status}}
	I0108 19:52:15.322458   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 19:52:15.322478   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 19:52:15.322610   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:52:15.330734   92604 api_server.go:52] waiting for apiserver process to appear ...
	I0108 19:52:15.330810   92604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 19:52:15.343876   92604 api_server.go:72] duration metric: took 248.149905ms to wait for apiserver process to appear ...
	I0108 19:52:15.343922   92604 api_server.go:88] waiting for apiserver healthz status ...
	I0108 19:52:15.343967   92604 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51579/healthz ...
	I0108 19:52:15.350434   92604 api_server.go:279] https://127.0.0.1:51579/healthz returned 200:
	ok
	I0108 19:52:15.352845   92604 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 19:52:15.352863   92604 api_server.go:131] duration metric: took 8.934611ms to wait for apiserver health ...
	I0108 19:52:15.352871   92604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 19:52:15.361225   92604 system_pods.go:59] 8 kube-system pods found
	I0108 19:52:15.361242   92604 system_pods.go:61] "coredns-76f75df574-d4ls2" [74992a9f-2138-4b4d-93b4-4d6516780733] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 19:52:15.361249   92604 system_pods.go:61] "etcd-newest-cni-103000" [6118beef-e8b6-4f3d-a86b-37e8ee3b3b3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 19:52:15.361261   92604 system_pods.go:61] "kube-apiserver-newest-cni-103000" [0dbd57fc-10a5-4ffb-9dfb-822de2258a63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 19:52:15.361269   92604 system_pods.go:61] "kube-controller-manager-newest-cni-103000" [3389a07f-f60a-417d-b3b6-1676727259b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 19:52:15.361274   92604 system_pods.go:61] "kube-proxy-t8v4p" [45421e82-be11-4d8d-99c9-3904c8f05b61] Running
	I0108 19:52:15.361281   92604 system_pods.go:61] "kube-scheduler-newest-cni-103000" [648b0dbb-ad18-44c9-ab8e-5d3b44d14c22] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 19:52:15.361287   92604 system_pods.go:61] "metrics-server-57f55c9bc5-8bvfm" [f6d6ff58-6bfb-472b-9979-c143a465b221] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 19:52:15.361291   92604 system_pods.go:61] "storage-provisioner" [7a7cb1e0-e26c-4056-a6c6-e8d367a5f7ff] Running
	I0108 19:52:15.361296   92604 system_pods.go:74] duration metric: took 8.375167ms to wait for pod list to return data ...
	I0108 19:52:15.361308   92604 default_sa.go:34] waiting for default service account to be created ...
	I0108 19:52:15.362486   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:52:15.365751   92604 default_sa.go:45] found service account: "default"
	I0108 19:52:15.365772   92604 default_sa.go:55] duration metric: took 4.45017ms for default service account to be created ...
	I0108 19:52:15.365780   92604 kubeadm.go:581] duration metric: took 270.060149ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0108 19:52:15.365796   92604 node_conditions.go:102] verifying NodePressure condition ...
	I0108 19:52:15.369433   92604 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0108 19:52:15.369448   92604 node_conditions.go:123] node cpu capacity is 12
	I0108 19:52:15.369463   92604 node_conditions.go:105] duration metric: took 3.661559ms to run NodePressure ...
	I0108 19:52:15.369474   92604 start.go:228] waiting for startup goroutines ...
	I0108 19:52:15.393121   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:52:15.393119   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:52:15.393230   92604 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 19:52:15.393247   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 19:52:15.393334   92604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-103000
	I0108 19:52:15.457819   92604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51580 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/newest-cni-103000/id_rsa Username:docker}
	I0108 19:52:15.475155   92604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 19:52:15.500333   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 19:52:15.500352   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 19:52:15.500360   92604 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 19:52:15.500373   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 19:52:15.519672   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 19:52:15.519687   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 19:52:15.519667   92604 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 19:52:15.519722   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 19:52:15.541319   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 19:52:15.541338   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 19:52:15.541319   92604 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 19:52:15.541374   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 19:52:15.607068   92604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 19:52:15.611146   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 19:52:15.611158   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0108 19:52:15.614109   92604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 19:52:15.632870   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 19:52:15.632930   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 19:52:15.716316   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 19:52:15.716330   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 19:52:15.734901   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 19:52:15.734930   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 19:52:15.820853   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 19:52:15.820922   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 19:52:15.911832   92604 addons.go:429] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 19:52:15.911847   92604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 19:52:15.930628   92604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 19:52:16.524343   92604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049190747s)
	I0108 19:52:16.650505   92604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043435777s)
	I0108 19:52:16.650516   92604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036412426s)
	I0108 19:52:16.650531   92604 addons.go:473] Verifying addon metrics-server=true in "newest-cni-103000"
	I0108 19:52:16.845979   92604 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-103000 addons enable metrics-server	
	
	
	I0108 19:52:16.890151   92604 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0108 19:52:16.949106   92604 addons.go:508] enable addons completed in 1.861891147s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
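Each addon enabled above follows the same shape: copy its manifests into /etc/kubernetes/addons/, then apply the group with the cluster's own kubectl binary and the in-VM kubeconfig. A sketch of that apply step (applyAddon is a hypothetical helper; the paths and version are copied from the log):

```go
// Sketch: apply a group of addon manifests the way the logged commands
// do, via "sudo KUBECONFIG=... kubectl apply -f ... -f ...".
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyAddon(
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	); err != nil {
		fmt.Println(err)
	}
}
```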
	I0108 19:52:16.949149   92604 start.go:233] waiting for cluster config update ...
	I0108 19:52:16.949167   92604 start.go:242] writing updated cluster config ...
	I0108 19:52:16.949637   92604 ssh_runner.go:195] Run: rm -f paused
	I0108 19:52:16.991134   92604 start.go:600] kubectl: 1.28.2, cluster: 1.29.0-rc.2 (minor skew: 1)
	I0108 19:52:17.012186   92604 out.go:177] * Done! kubectl is now configured to use "newest-cni-103000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.748451198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.784302399Z" level=info msg="Loading containers: done."
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792131546Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.792197106Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:21 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822909958Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:21 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:21.822949228Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864005532Z" level=info msg="Processing signal 'terminated'"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.864821872Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865266652Z" level=info msg="Daemon shutdown complete"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[723]: time="2024-01-09T03:28:28.865552359Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: docker.service: Deactivated successfully.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 09 03:28:28 old-k8s-version-901000 systemd[1]: Starting Docker Application Container Engine...
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.917820203Z" level=info msg="Starting up"
	Jan 09 03:28:28 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:28.928976131Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.078619687Z" level=info msg="Loading containers: start."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.160686378Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.195694564Z" level=info msg="Loading containers: done."
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203622773Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.203680796Z" level=info msg="Daemon has completed initialization"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.230851290Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 09 03:28:29 old-k8s-version-901000 dockerd[949]: time="2024-01-09T03:28:29.231055167Z" level=info msg="API listen on [::]:2376"
	Jan 09 03:28:29 old-k8s-version-901000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-01-09T03:52:25Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	
	==> dmesg <==
	
	
	==> kernel <==
	 03:52:25 up  3:11,  0 users,  load average: 0.75, 0.59, 0.77
	Linux old-k8s-version-901000 6.5.11-linuxkit #1 SMP PREEMPT_DYNAMIC Mon Dec  4 10:03:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1389.
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: I0109 03:52:24.963791   42654 server.go:410] Version: v1.16.0
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: I0109 03:52:24.964056   42654 plugins.go:100] No cloud provider specified.
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: I0109 03:52:24.964067   42654 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: I0109 03:52:24.966738   42654 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: W0109 03:52:24.967397   42654 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: W0109 03:52:24.967459   42654 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:52:24 old-k8s-version-901000 kubelet[42654]: F0109 03:52:24.967480   42654 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:52:24 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 09 03:52:25 old-k8s-version-901000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1390.
	Jan 09 03:52:25 old-k8s-version-901000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 09 03:52:25 old-k8s-version-901000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: I0109 03:52:25.705148   42774 server.go:410] Version: v1.16.0
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: I0109 03:52:25.705351   42774 plugins.go:100] No cloud provider specified.
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: I0109 03:52:25.705360   42774 server.go:773] Client rotation is on, will bootstrap in background
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: I0109 03:52:25.707312   42774 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: W0109 03:52:25.708731   42774 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: W0109 03:52:25.708803   42774 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 09 03:52:25 old-k8s-version-901000 kubelet[42774]: F0109 03:52:25.708835   42774 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 09 03:52:25 old-k8s-version-901000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 09 03:52:25 old-k8s-version-901000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0108 19:52:25.604567   92831 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 2 (411.026445ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (402.77s)


Test pass (292/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.86
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.28.4/json-events 14.41
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.3
17 TestDownloadOnly/v1.29.0-rc.2/json-events 42.08
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.3
23 TestDownloadOnly/DeleteAll 0.64
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
25 TestDownloadOnlyKic 1.95
26 TestBinaryMirror 1.61
27 TestOffline 43.94
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
32 TestAddons/Setup 153.72
36 TestAddons/parallel/InspektorGadget 10.83
37 TestAddons/parallel/MetricsServer 6.82
38 TestAddons/parallel/HelmTiller 10.05
40 TestAddons/parallel/CSI 74.37
41 TestAddons/parallel/Headlamp 13.42
42 TestAddons/parallel/CloudSpanner 6.67
43 TestAddons/parallel/LocalPath 54.43
44 TestAddons/parallel/NvidiaDevicePlugin 6.05
45 TestAddons/parallel/Yakd 5
48 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestAddons/StoppedEnableDisable 11.8
50 TestCertOptions 24.35
51 TestCertExpiration 231.84
52 TestDockerFlags 25.96
53 TestForceSystemdFlag 26.6
54 TestForceSystemdEnv 27.6
57 TestHyperKitDriverInstallOrUpdate 8.57
60 TestErrorSpam/setup 20.88
61 TestErrorSpam/start 2.04
62 TestErrorSpam/status 1.2
63 TestErrorSpam/pause 1.69
64 TestErrorSpam/unpause 1.71
65 TestErrorSpam/stop 2.83
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 36.61
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 35.96
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
77 TestFunctional/serial/CacheCmd/cache/add_local 1.63
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
82 TestFunctional/serial/CacheCmd/cache/delete 0.17
83 TestFunctional/serial/MinikubeKubectlCmd 0.56
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.78
85 TestFunctional/serial/ExtraConfig 36.65
86 TestFunctional/serial/ComponentHealth 0.06
87 TestFunctional/serial/LogsCmd 2.99
88 TestFunctional/serial/LogsFileCmd 3.13
89 TestFunctional/serial/InvalidService 5.71
91 TestFunctional/parallel/ConfigCmd 0.49
92 TestFunctional/parallel/DashboardCmd 10.01
93 TestFunctional/parallel/DryRun 1.38
94 TestFunctional/parallel/InternationalLanguage 0.64
95 TestFunctional/parallel/StatusCmd 1.22
100 TestFunctional/parallel/AddonsCmd 0.26
101 TestFunctional/parallel/PersistentVolumeClaim 27.5
103 TestFunctional/parallel/SSHCmd 0.76
104 TestFunctional/parallel/CpCmd 2.8
105 TestFunctional/parallel/MySQL 32.82
106 TestFunctional/parallel/FileSync 0.43
107 TestFunctional/parallel/CertSync 2.5
111 TestFunctional/parallel/NodeLabels 0.06
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
115 TestFunctional/parallel/License 0.63
116 TestFunctional/parallel/Version/short 0.1
117 TestFunctional/parallel/Version/components 1.06
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.55
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.52
123 TestFunctional/parallel/ImageCommands/Setup 2.46
124 TestFunctional/parallel/DockerEnv/bash 2.06
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.94
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.45
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.6
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.42
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.86
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.02
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.67
135 TestFunctional/parallel/ServiceCmd/DeployApp 19.19
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.19
141 TestFunctional/parallel/ServiceCmd/List 0.62
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
143 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
150 TestFunctional/parallel/ServiceCmd/Format 15
151 TestFunctional/parallel/ServiceCmd/URL 15
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
154 TestFunctional/parallel/ProfileCmd/profile_list 0.59
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
157 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
158 TestFunctional/delete_addon-resizer_images 0.13
159 TestFunctional/delete_my-image_image 0.05
160 TestFunctional/delete_minikube_cached_images 0.05
164 TestImageBuild/serial/Setup 21.77
165 TestImageBuild/serial/NormalBuild 1.77
166 TestImageBuild/serial/BuildWithBuildArg 0.94
167 TestImageBuild/serial/BuildWithDockerIgnore 0.74
168 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
178 TestJSONOutput/start/Command 36.81
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.59
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.62
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 10.98
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.76
203 TestKicCustomNetwork/create_custom_network 23.61
204 TestKicCustomNetwork/use_default_bridge_network 22.87
205 TestKicExistingNetwork 23.34
206 TestKicCustomSubnet 23.34
207 TestKicStaticIP 23.84
208 TestMainNoArgs 0.1
209 TestMinikubeProfile 49.54
212 TestMountStart/serial/StartWithMountFirst 7.39
213 TestMountStart/serial/VerifyMountFirst 0.39
214 TestMountStart/serial/StartWithMountSecond 7.28
215 TestMountStart/serial/VerifyMountSecond 0.4
216 TestMountStart/serial/DeleteFirst 2.17
217 TestMountStart/serial/VerifyMountPostDelete 0.38
218 TestMountStart/serial/Stop 1.56
219 TestMountStart/serial/RestartStopped 8.24
220 TestMountStart/serial/VerifyMountPostStop 0.39
223 TestMultiNode/serial/FreshStart2Nodes 62.9
224 TestMultiNode/serial/DeployApp2Nodes 39.04
225 TestMultiNode/serial/PingHostFrom2Pods 0.94
226 TestMultiNode/serial/AddNode 15.35
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.42
229 TestMultiNode/serial/CopyFile 13.84
230 TestMultiNode/serial/StopNode 2.93
231 TestMultiNode/serial/StartAfterStop 13.81
232 TestMultiNode/serial/RestartKeepsNodes 116.47
233 TestMultiNode/serial/DeleteNode 5.87
234 TestMultiNode/serial/StopMultiNode 21.92
235 TestMultiNode/serial/RestartMultiNode 59.75
236 TestMultiNode/serial/ValidateNameConflict 25.08
240 TestPreload 144.02
242 TestScheduledStopUnix 94.33
243 TestSkaffold 122.6
245 TestInsufficientStorage 9.99
261 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.1
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 12.51
263 TestStoppedBinaryUpgrade/Setup 0.96
265 TestStoppedBinaryUpgrade/MinikubeLogs 3.43
267 TestPause/serial/Start 37.28
268 TestPause/serial/SecondStartNoReconfiguration 39.96
269 TestPause/serial/Pause 0.69
270 TestPause/serial/VerifyStatus 0.4
271 TestPause/serial/Unpause 0.69
272 TestPause/serial/PauseAgain 0.75
273 TestPause/serial/DeletePaused 2.46
274 TestPause/serial/VerifyDeletedResources 0.54
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.36
284 TestNoKubernetes/serial/StartWithK8s 22.03
285 TestNoKubernetes/serial/StartWithStopK8s 16.87
286 TestNoKubernetes/serial/Start 7.33
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
288 TestNoKubernetes/serial/ProfileList 1.3
289 TestNoKubernetes/serial/Stop 1.53
290 TestNoKubernetes/serial/StartNoArgs 8.48
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
292 TestNetworkPlugins/group/auto/Start 76.13
293 TestNetworkPlugins/group/kindnet/Start 51.51
294 TestNetworkPlugins/group/auto/KubeletFlags 0.39
295 TestNetworkPlugins/group/auto/NetCatPod 11.31
296 TestNetworkPlugins/group/auto/DNS 0.15
297 TestNetworkPlugins/group/auto/Localhost 0.12
298 TestNetworkPlugins/group/auto/HairPin 0.12
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
301 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
302 TestNetworkPlugins/group/calico/Start 74.87
303 TestNetworkPlugins/group/kindnet/DNS 0.18
304 TestNetworkPlugins/group/kindnet/Localhost 0.15
305 TestNetworkPlugins/group/kindnet/HairPin 0.14
306 TestNetworkPlugins/group/custom-flannel/Start 53.85
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.39
309 TestNetworkPlugins/group/calico/NetCatPod 12.29
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
312 TestNetworkPlugins/group/calico/DNS 0.14
313 TestNetworkPlugins/group/calico/Localhost 0.13
314 TestNetworkPlugins/group/calico/HairPin 0.12
315 TestNetworkPlugins/group/custom-flannel/DNS 0.13
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
318 TestNetworkPlugins/group/false/Start 38.53
319 TestNetworkPlugins/group/enable-default-cni/Start 38.27
320 TestNetworkPlugins/group/false/KubeletFlags 0.4
321 TestNetworkPlugins/group/false/NetCatPod 12.26
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
327 TestNetworkPlugins/group/false/DNS 0.14
328 TestNetworkPlugins/group/false/Localhost 0.12
329 TestNetworkPlugins/group/false/HairPin 0.12
330 TestNetworkPlugins/group/flannel/Start 51.29
331 TestNetworkPlugins/group/bridge/Start 45.03
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
333 TestNetworkPlugins/group/bridge/NetCatPod 11.26
334 TestNetworkPlugins/group/flannel/ControllerPod 6.01
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
336 TestNetworkPlugins/group/flannel/NetCatPod 10.26
337 TestNetworkPlugins/group/bridge/DNS 0.17
338 TestNetworkPlugins/group/bridge/Localhost 0.13
339 TestNetworkPlugins/group/bridge/HairPin 0.12
340 TestNetworkPlugins/group/flannel/DNS 0.15
341 TestNetworkPlugins/group/flannel/Localhost 0.13
342 TestNetworkPlugins/group/flannel/HairPin 0.13
343 TestNetworkPlugins/group/kubenet/Start 65.19
346 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
347 TestNetworkPlugins/group/kubenet/NetCatPod 12.26
348 TestNetworkPlugins/group/kubenet/DNS 0.13
349 TestNetworkPlugins/group/kubenet/Localhost 0.12
350 TestNetworkPlugins/group/kubenet/HairPin 0.12
352 TestStartStop/group/no-preload/serial/FirstStart 149.55
353 TestStartStop/group/no-preload/serial/DeployApp 7.55
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
355 TestStartStop/group/no-preload/serial/Stop 11
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.44
359 TestStartStop/group/no-preload/serial/SecondStart 340.25
360 TestStartStop/group/old-k8s-version/serial/Stop 1.63
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.43
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
366 TestStartStop/group/no-preload/serial/Pause 3.36
368 TestStartStop/group/embed-certs/serial/FirstStart 75.28
369 TestStartStop/group/embed-certs/serial/DeployApp 9.31
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
371 TestStartStop/group/embed-certs/serial/Stop 10.97
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.47
373 TestStartStop/group/embed-certs/serial/SecondStart 560.9
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
378 TestStartStop/group/embed-certs/serial/Pause 3.34
380 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 36.6
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.99
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.44
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 333.46
387 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.01
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.44
392 TestStartStop/group/newest-cni/serial/FirstStart 33.25
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
395 TestStartStop/group/newest-cni/serial/Stop 10.93
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
397 TestStartStop/group/newest-cni/serial/SecondStart 27.82
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
401 TestStartStop/group/newest-cni/serial/Pause 3.21
TestDownloadOnly/v1.16.0/json-events (17.86s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (17.864016174s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.86s)
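
The "(dbg) Run" / "(dbg) Done" pairs above come from the test harness, which shells out to the minikube binary under test and records wall-clock time; the (17.86s) in the subtest header is that measured duration. A minimal Go sketch of the pattern (the helper name runTimed and the exact log format are illustrative, not minikube's actual helpers):

    package harness

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runTimed executes a command, echoes it in the "(dbg) Run/Done" style
    // seen above, and returns its combined output and wall-clock duration.
    func runTimed(name string, args ...string) ([]byte, time.Duration, error) {
        cmd := exec.Command(name, args...)
        fmt.Printf("(dbg) Run:  %s\n", cmd.String())
        start := time.Now()
        out, err := cmd.CombinedOutput() // stdout and stderr, interleaved as in the report
        elapsed := time.Since(start)
        fmt.Printf("(dbg) Done: %s: (%s)\n", cmd.String(), elapsed)
        return out, elapsed, err
    }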

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
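
The preload-exists subtest only has to confirm that the json-events step above left the preloaded-images tarball in the local cache, which is why it completes in 0.00s. A sketch of such a check, with the cache layout taken from the paths in the logs (the helper name and signature are illustrative):

    package harness

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadExists reports whether the tarball downloaded by the previous
    // subtest is present, e.g. under
    // .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4.
    func preloadExists(minikubeHome, k8sVersion string) error {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        tarball := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload not found: %w", err)
        }
        return nil
    }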

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-523000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-523000: exit status 85 (313.78109ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:31 PST |          |
	|         | -p download-only-523000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 18:31:29
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 18:31:29.978091   75371 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:31:29.978422   75371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:31:29.978427   75371 out.go:309] Setting ErrFile to fd 2...
	I0108 18:31:29.978431   75371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:31:29.978610   75371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	W0108 18:31:29.978715   75371 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: no such file or directory
	I0108 18:31:29.980468   75371 out.go:303] Setting JSON to true
	I0108 18:31:30.003644   75371 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34261,"bootTime":1704733228,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:31:30.003728   75371 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:31:30.024340   75371 out.go:97] [download-only-523000] minikube v1.32.0 on Darwin 14.2.1
	I0108 18:31:30.047086   75371 out.go:169] MINIKUBE_LOCATION=17866
	I0108 18:31:30.024547   75371 notify.go:220] Checking for updates...
	W0108 18:31:30.024554   75371 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 18:31:30.090344   75371 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:31:30.111347   75371 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:31:30.132360   75371 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:31:30.153419   75371 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	W0108 18:31:30.195469   75371 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 18:31:30.195957   75371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 18:31:30.253237   75371 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:31:30.253389   75371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:31:30.358781   75371 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:31:30.349222228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:31:30.380350   75371 out.go:97] Using the docker driver based on user configuration
	I0108 18:31:30.380420   75371 start.go:298] selected driver: docker
	I0108 18:31:30.380460   75371 start.go:902] validating driver "docker" against <nil>
	I0108 18:31:30.380668   75371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:31:30.484479   75371 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:31:30.475086935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:31:30.484651   75371 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 18:31:30.487885   75371 start_flags.go:392] Using suggested 5885MB memory alloc based on sys=32768MB, container=5933MB
	I0108 18:31:30.488028   75371 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 18:31:30.508790   75371 out.go:169] Using Docker Desktop driver with root privileges
	I0108 18:31:30.531261   75371 cni.go:84] Creating CNI manager for ""
	I0108 18:31:30.531302   75371 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 18:31:30.531319   75371 start_flags.go:321] config:
	{Name:download-only-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-523000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:31:30.552919   75371 out.go:97] Starting control plane node download-only-523000 in cluster download-only-523000
	I0108 18:31:30.552970   75371 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 18:31:30.573851   75371 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0108 18:31:30.573964   75371 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 18:31:30.574073   75371 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 18:31:30.625637   75371 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0108 18:31:30.625889   75371 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0108 18:31:30.626022   75371 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0108 18:31:30.628804   75371 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 18:31:30.628825   75371 cache.go:56] Caching tarball of preloaded images
	I0108 18:31:30.628979   75371 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 18:31:30.649935   75371 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 18:31:30.650015   75371 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:30.740124   75371 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 18:31:40.282359   75371 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:40.282586   75371 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:40.832454   75371 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 18:31:40.832705   75371 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/download-only-523000/config.json ...
	I0108 18:31:40.832729   75371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/download-only-523000/config.json: {Name:mk3cf92fbacf246c391b0f66cf8a3939e372d3fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 18:31:40.833006   75371 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 18:31:40.833289   75371 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0108 18:31:42.697349   75371 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-523000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
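
This subtest passes even though "minikube logs" exits with status 85: the download-only profile never created a node (hence the closing 'The control plane node "" does not exist.' message), so the test tolerates the non-zero exit and asserts on the captured output and the command's duration instead. A sketch of that tolerance, reusing the binary and profile names from the log (the helper name is illustrative):

    package harness

    import (
        "errors"
        "log"
        "os/exec"
    )

    // logsTolerant runs "minikube logs" against a download-only profile and
    // treats a non-zero exit (status 85 in this run) as informational.
    func logsTolerant() []byte {
        cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-523000")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Expected here: the profile has no control-plane node yet.
            log.Printf("minikube logs failed with error: %v", exitErr)
        }
        return out // assertions run against the captured audit/log text
    }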

TestDownloadOnly/v1.28.4/json-events (14.41s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (14.410525421s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.41s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-523000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-523000: exit status 85 (295.628584ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:31 PST |          |
	|         | -p download-only-523000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:31 PST |          |
	|         | -p download-only-523000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 18:31:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 18:31:48.156300   75404 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:31:48.156609   75404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:31:48.156616   75404 out.go:309] Setting ErrFile to fd 2...
	I0108 18:31:48.156620   75404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:31:48.156806   75404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	W0108 18:31:48.156911   75404 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: no such file or directory
	I0108 18:31:48.158120   75404 out.go:303] Setting JSON to true
	I0108 18:31:48.180398   75404 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34280,"bootTime":1704733228,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:31:48.180489   75404 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:31:48.202176   75404 out.go:97] [download-only-523000] minikube v1.32.0 on Darwin 14.2.1
	I0108 18:31:48.223156   75404 out.go:169] MINIKUBE_LOCATION=17866
	I0108 18:31:48.202357   75404 notify.go:220] Checking for updates...
	I0108 18:31:48.264902   75404 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:31:48.286196   75404 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:31:48.307121   75404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:31:48.327866   75404 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	W0108 18:31:48.369957   75404 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 18:31:48.370338   75404 config.go:182] Loaded profile config "download-only-523000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 18:31:48.370384   75404 start.go:810] api.Load failed for download-only-523000: filestore "download-only-523000": Docker machine "download-only-523000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 18:31:48.370464   75404 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 18:31:48.370486   75404 start.go:810] api.Load failed for download-only-523000: filestore "download-only-523000": Docker machine "download-only-523000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 18:31:48.425354   75404 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:31:48.425497   75404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:31:48.526589   75404 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:31:48.51701241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:31:48.547153   75404 out.go:97] Using the docker driver based on existing profile
	I0108 18:31:48.547169   75404 start.go:298] selected driver: docker
	I0108 18:31:48.547176   75404 start.go:902] validating driver "docker" against &{Name:download-only-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-523000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:31:48.547328   75404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:31:48.647958   75404 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:31:48.638639159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:31:48.651160   75404 cni.go:84] Creating CNI manager for ""
	I0108 18:31:48.651183   75404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 18:31:48.651200   75404 start_flags.go:321] config:
	{Name:download-only-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-523000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:31:48.672343   75404 out.go:97] Starting control plane node download-only-523000 in cluster download-only-523000
	I0108 18:31:48.672373   75404 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 18:31:48.693378   75404 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0108 18:31:48.693436   75404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 18:31:48.693504   75404 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 18:31:48.745286   75404 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0108 18:31:48.745471   75404 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0108 18:31:48.745492   75404 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0108 18:31:48.745497   75404 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0108 18:31:48.745505   75404 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0108 18:31:48.750345   75404 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 18:31:48.750358   75404 cache.go:56] Caching tarball of preloaded images
	I0108 18:31:48.751178   75404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 18:31:48.772623   75404 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 18:31:48.772651   75404 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:48.853649   75404 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 18:31:55.568031   75404 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:55.568206   75404 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:31:56.192534   75404 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 18:31:56.192675   75404 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/download-only-523000/config.json ...
	I0108 18:31:56.193068   75404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 18:31:56.193995   75404 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-523000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)
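
Note on these LogsDuration subtests: they pass even though "minikube logs" exits non-zero, because a --download-only profile never creates a control-plane node, so the harness expects the command to fail (exit status 85 in this run). A minimal Go sketch of that style of exit-code assertion (the helper name and messages are illustrative, not minikube's actual test code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runExitCode runs cmd and reports its exit code; 0 means success.
	func runExitCode(cmd *exec.Cmd) (int, error) {
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				return ee.ExitCode(), nil // process ran but returned non-zero
			}
			return -1, err // command could not be started at all
		}
		return 0, nil
	}

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-523000")
		code, err := runExitCode(cmd)
		if err != nil {
			fmt.Println("failed to run:", err)
			return
		}
		if code != 85 {
			fmt.Printf("expected exit 85 for a download-only profile, got %d\n", code)
		}
	}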

TestDownloadOnly/v1.29.0-rc.2/json-events (42.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-523000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (42.077854656s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (42.08s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-523000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-523000: exit status 85 (295.633697ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:31 PST |          |
	|         | -p download-only-523000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:31 PST |          |
	|         | -p download-only-523000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-523000 | jenkins | v1.32.0 | 08 Jan 24 18:32 PST |          |
	|         | -p download-only-523000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 18:32:02
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 18:32:02.866533   75437 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:32:02.866838   75437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:32:02.866845   75437 out.go:309] Setting ErrFile to fd 2...
	I0108 18:32:02.866849   75437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:32:02.867056   75437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	W0108 18:32:02.867155   75437 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17866-74927/.minikube/config/config.json: no such file or directory
	I0108 18:32:02.868449   75437 out.go:303] Setting JSON to true
	I0108 18:32:02.890919   75437 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34294,"bootTime":1704733228,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:32:02.891043   75437 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:32:02.912486   75437 out.go:97] [download-only-523000] minikube v1.32.0 on Darwin 14.2.1
	I0108 18:32:02.933796   75437 out.go:169] MINIKUBE_LOCATION=17866
	I0108 18:32:02.912740   75437 notify.go:220] Checking for updates...
	I0108 18:32:02.977536   75437 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:32:02.998830   75437 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:32:03.020936   75437 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:32:03.041645   75437 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	W0108 18:32:03.083784   75437 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 18:32:03.084610   75437 config.go:182] Loaded profile config "download-only-523000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0108 18:32:03.084694   75437 start.go:810] api.Load failed for download-only-523000: filestore "download-only-523000": Docker machine "download-only-523000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 18:32:03.084862   75437 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 18:32:03.084914   75437 start.go:810] api.Load failed for download-only-523000: filestore "download-only-523000": Docker machine "download-only-523000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 18:32:03.141574   75437 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:32:03.141723   75437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:32:03.243078   75437 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:32:03.233656065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:32:03.264405   75437 out.go:97] Using the docker driver based on existing profile
	I0108 18:32:03.264448   75437 start.go:298] selected driver: docker
	I0108 18:32:03.264465   75437 start.go:902] validating driver "docker" against &{Name:download-only-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-523000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:32:03.264777   75437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:32:03.366241   75437 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:62 SystemTime:2024-01-09 02:32:03.357256935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:32:03.369453   75437 cni.go:84] Creating CNI manager for ""
	I0108 18:32:03.369477   75437 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 18:32:03.369494   75437 start_flags.go:321] config:
	{Name:download-only-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5885 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-523000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:32:03.391032   75437 out.go:97] Starting control plane node download-only-523000 in cluster download-only-523000
	I0108 18:32:03.391082   75437 cache.go:121] Beginning downloading kic base image for docker with docker
	I0108 18:32:03.411982   75437 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0108 18:32:03.412082   75437 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 18:32:03.412171   75437 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0108 18:32:03.463525   75437 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0108 18:32:03.463751   75437 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0108 18:32:03.463776   75437 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0108 18:32:03.463782   75437 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0108 18:32:03.463792   75437 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0108 18:32:03.467534   75437 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 18:32:03.467546   75437 cache.go:56] Caching tarball of preloaded images
	I0108 18:32:03.467979   75437 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 18:32:03.489172   75437 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 18:32:03.489199   75437 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:32:03.573257   75437 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0108 18:32:09.886685   75437 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:32:09.886948   75437 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0108 18:32:10.433367   75437 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0108 18:32:10.433471   75437 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/download-only-523000/config.json ...
	I0108 18:32:10.433994   75437 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0108 18:32:10.434789   75437 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-523000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.30s)
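
Each preload download above carries a "?checksum=md5:..." query string (a convention understood by hashicorp/go-getter, which minikube's downloader builds on), and the log records the getting/saving/verifying checksum steps around each tarball. A sketch of that verification step in Go, assuming the tarball sits in the current directory and using the v1.29.0-rc.2 digest from the URL above:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// fileMD5 streams a file through an MD5 hash and returns the hex digest.
	func fileMD5(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		// Digest taken from the ?checksum=md5:... query in the log above.
		const want = "74b99cd9fa76659778caad266ad399ba"
		sum, err := fileMD5("preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4")
		if err != nil {
			fmt.Println("hash error:", err)
			return
		}
		fmt.Println("checksum ok:", sum == want) // true when the tarball is intact
	}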

TestDownloadOnly/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.64s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-523000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.95s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-046000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-046000
--- PASS: TestDownloadOnlyKic (1.95s)

TestBinaryMirror (1.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-988000 --alsologtostderr --binary-mirror http://127.0.0.1:62201 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-988000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-988000
--- PASS: TestBinaryMirror (1.61s)
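
TestBinaryMirror passes "--binary-mirror http://127.0.0.1:62201", redirecting the Kubernetes binary downloads to a local HTTP endpoint that the test stands up beforehand. A minimal stand-in for such a mirror (the "./mirror" directory layout is an assumption):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of pre-fetched release binaries on the port the
		// test hands to --binary-mirror.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:62201", nil))
	}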

TestOffline (43.94s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-141000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-141000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (41.487776263s)
helpers_test.go:175: Cleaning up "offline-docker-141000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-141000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-141000: (2.452158277s)
--- PASS: TestOffline (43.94s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-388000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-388000: exit status 85 (193.00657ms)

-- stdout --
	* Profile "addons-388000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-388000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-388000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-388000: exit status 85 (213.408518ms)

-- stdout --
	* Profile "addons-388000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-388000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (153.72s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-388000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-388000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.72214855s)
--- PASS: TestAddons/Setup (153.72s)
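
The Setup command above repeats --addons= thirteen times; minikube's cobra/pflag-based CLI accumulates every occurrence into one list rather than keeping only the last value. The same behavior can be reproduced with a custom flag.Value from the Go standard library (illustrative, not minikube's actual flag definition):

	package main

	import (
		"flag"
		"fmt"
		"strings"
	)

	// addonList gathers every --addons occurrence, the way a repeatable
	// pflag slice flag does for minikube's real option.
	type addonList []string

	func (a *addonList) String() string { return strings.Join(*a, ",") }
	func (a *addonList) Set(v string) error {
		*a = append(*a, v)
		return nil
	}

	func main() {
		var addons addonList
		flag.Var(&addons, "addons", "addon to enable (repeatable)")
		flag.Parse()
		fmt.Println(addons)
	}

Running it as "go run . --addons=registry --addons=ingress" prints [registry ingress].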

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z6psw" [a47affce-00e2-4a4d-a71e-141cd779c88f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007272434s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-388000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-388000: (5.819994767s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.165742ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-qztnj" [e751ea67-21c0-4c99-864f-0754f12783cd] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005030374s
addons_test.go:415: (dbg) Run:  kubectl --context addons-388000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

TestAddons/parallel/HelmTiller (10.05s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.41275ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-scnw5" [27236aa4-7eed-4420-a6e3-fbf0e3e0f3f2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005871592s
addons_test.go:473: (dbg) Run:  kubectl --context addons-388000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-388000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.308717626s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.05s)

TestAddons/parallel/CSI (74.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.980515ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ebfad9b-b899-410e-8e9c-e4d15b079758] Pending
helpers_test.go:344: "task-pv-pod" [2ebfad9b-b899-410e-8e9c-e4d15b079758] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ebfad9b-b899-410e-8e9c-e4d15b079758] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.006216428s
addons_test.go:584: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-388000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-388000 delete pod task-pv-pod: (1.051365178s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-388000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c587604f-6c37-4364-b5af-98da209ef134] Pending
helpers_test.go:344: "task-pv-pod-restore" [c587604f-6c37-4364-b5af-98da209ef134] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c587604f-6c37-4364-b5af-98da209ef134] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006425306s
addons_test.go:626: (dbg) Run:  kubectl --context addons-388000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-388000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-388000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-388000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.874168635s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (74.37s)
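
The long run of identical helpers_test.go:394 lines above is a poll loop: the harness re-runs "kubectl get pvc hpvc -o jsonpath={.status.phase}" until the claim reports Bound or the 6m0s budget expires. A sketch of that loop in Go (the 2s poll interval is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pvcPhase shells out the same way the repeated log lines above do.
	func pvcPhase(kubeContext, name, namespace string) (string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // the test's 6m0s budget
		for time.Now().Before(deadline) {
			phase, err := pvcPhase("addons-388000", "hpvc", "default")
			if err == nil && phase == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second) // poll interval (assumed)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}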

TestAddons/parallel/Headlamp (13.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-388000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-388000 --alsologtostderr -v=1: (1.412350523s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-m24z9" [41b1dd88-645e-40ee-af79-45ab3f161ef6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-m24z9" [41b1dd88-645e-40ee-af79-45ab3f161ef6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004893363s
--- PASS: TestAddons/parallel/Headlamp (13.42s)

TestAddons/parallel/CloudSpanner (6.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-rk2qf" [849dd001-25b5-44ec-b504-3d15dc3648d8] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003848901s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-388000
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

TestAddons/parallel/LocalPath (54.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-388000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-388000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [79e014dc-4606-418b-b915-7cf455313736] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [79e014dc-4606-418b-b915-7cf455313736] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [79e014dc-4606-418b-b915-7cf455313736] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005016795s
addons_test.go:891: (dbg) Run:  kubectl --context addons-388000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 ssh "cat /opt/local-path-provisioner/pvc-71afee91-d6c4-49d7-9780-0ac802fad3f8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-388000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-388000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-388000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.288806314s)
--- PASS: TestAddons/parallel/LocalPath (54.43s)

TestAddons/parallel/NvidiaDevicePlugin (6.05s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q6lvv" [403e51e7-56b4-40cc-9ad2-e2c283af2406] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004529578s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-388000
addons_test.go:955: (dbg) Done: out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-388000: (1.045750965s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.05s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-h75wj" [687effda-a724-4dbd-8707-76586453250a] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00362063s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-388000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-388000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.8s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-388000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-388000: (11.072087403s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-388000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-388000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-388000
--- PASS: TestAddons/StoppedEnableDisable (11.80s)

TestCertOptions (24.35s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-746000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-746000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (21.116335562s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-746000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-746000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-746000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-746000: (2.406376652s)
--- PASS: TestCertOptions (24.35s)
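
Reproducing TestCertOptions by hand amounts to starting a profile with extra apiserver SANs and inspecting the generated certificate. A minimal sketch, assuming a minikube binary on PATH (the harness uses out/minikube-darwin-amd64; the profile name is illustrative):

    # start a throwaway profile with custom apiserver IPs, names, and port
    minikube start -p cert-options --memory=2048 --apiserver-ips=192.168.15.15 \
      --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
    # the extra SANs and port should appear in the serving certificate
    minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    minikube delete -p cert-options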

TestCertExpiration (231.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=3m --driver=docker : (23.034290942s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0108 19:12:07.566663   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.572535   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.582762   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.603750   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.644644   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.725496   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:07.886658   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:08.207583   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:08.848325   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:10.130070   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:12.690419   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:17.812242   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:12:28.053035   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.369276789s)
helpers_test.go:175: Cleaning up "cert-expiration-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-871000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-871000: (2.439592617s)
--- PASS: TestCertExpiration (231.84s)
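
Note on the 231.84s wall time: each start above finishes in under 30 seconds, so the balance is the test waiting out the 3-minute certificates before the second start regenerates them. A minimal sketch of the same sequence, assuming a minikube binary on PATH (profile name illustrative):

    # issue certificates that expire after 3 minutes
    minikube start -p cert-expiration --memory=2048 --cert-expiration=3m --driver=docker
    sleep 180    # let the short-lived certificates lapse
    # restarting with a longer expiry forces the expired certs to be regenerated
    minikube start -p cert-expiration --memory=2048 --cert-expiration=8760h --driver=docker
    minikube delete -p cert-expiration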

TestDockerFlags (25.96s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-230000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-230000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (22.58613974s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-230000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-230000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-230000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-230000: (2.527087446s)
--- PASS: TestDockerFlags (25.96s)
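
The flow above is reproducible directly: --docker-env and --docker-opt are forwarded to the dockerd unit inside the node, and systemctl show exposes both. A minimal sketch, assuming a minikube binary on PATH (profile name illustrative):

    # forward environment variables and daemon options into the node's dockerd
    minikube start -p docker-flags --docker-env=FOO=BAR --docker-opt=debug --driver=docker
    # FOO=BAR should show up under Environment, debug under ExecStart
    minikube -p docker-flags ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    minikube delete -p docker-flags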

TestForceSystemdFlag (26.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-289000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-289000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.161000482s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-289000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-289000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-289000: (2.974044825s)
--- PASS: TestForceSystemdFlag (26.60s)
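
The assertion behind the docker info call above is that --force-systemd switches the container runtime's cgroup driver away from the default cgroupfs. A minimal sketch, assuming a minikube binary on PATH (profile name illustrative); TestForceSystemdEnv below exercises the same check with the MINIKUBE_FORCE_SYSTEMD environment variable instead of the flag:

    minikube start -p force-systemd --force-systemd --driver=docker
    # expected output: systemd (rather than the default cgroupfs)
    minikube -p force-systemd ssh "docker info --format {{.CgroupDriver}}"
    minikube delete -p force-systemd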

TestForceSystemdEnv (27.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-097000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-097000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (24.525508763s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-097000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-097000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-097000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-097000: (2.646281193s)
--- PASS: TestForceSystemdEnv (27.60s)

TestHyperKitDriverInstallOrUpdate (8.57s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.57s)

TestErrorSpam/setup (20.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-420000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-420000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 --driver=docker : (20.881122432s)
--- PASS: TestErrorSpam/setup (20.88s)

TestErrorSpam/start (2.04s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 start --dry-run
--- PASS: TestErrorSpam/start (2.04s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

TestErrorSpam/stop (2.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 stop: (2.216219523s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-420000 stop
--- PASS: TestErrorSpam/stop (2.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/test/nested/copy/75369/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (36.609292873s)
--- PASS: TestFunctional/serial/StartWithProxy (36.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --alsologtostderr -v=8: (35.962876525s)
functional_test.go:659: soft start took 35.963345245s for "functional-142000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-142000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.1: (1.250424837s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.3: (1.249746822s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:latest: (1.139752339s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1748795882/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add minikube-local-cache-test:functional-142000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add minikube-local-cache-test:functional-142000: (1.091946094s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache delete minikube-local-cache-test:functional-142000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-142000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (395.108762ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)
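
The non-zero crictl exit above is the point of the test: the image is removed inside the node, shown to be gone, then restored from minikube's local cache. A minimal sketch of the same round trip, assuming a minikube binary on PATH and a running profile:

    minikube cache add registry.k8s.io/pause:latest
    minikube ssh sudo docker rmi registry.k8s.io/pause:latest       # drop it from the node
    minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image is gone
    minikube cache reload                                           # re-push every cached image
    minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again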

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 kubectl -- --context functional-142000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-142000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)

TestFunctional/serial/ExtraConfig (36.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 18:40:24.133665   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.141711   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.151951   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.172639   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.212917   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.293688   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.454740   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:24.775147   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:25.416748   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:26.697153   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:40:29.257797   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.647009813s)
functional_test.go:757: restart took 36.647146355s for "functional-142000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.65s)
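
--extra-config threads per-component flags through to kubeadm, so restarting the existing profile above re-renders the apiserver manifest with the extra admission plugin. A sketch of how one might verify the flag landed, assuming kubectl access to the profile (the profile name is illustrative; the label selector is the standard kubeadm one, not something this report shows):

    minikube start -p functional --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the flag should now appear in the static pod's command line
    kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins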

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-142000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 logs: (2.986195125s)
--- PASS: TestFunctional/serial/LogsCmd (2.99s)

TestFunctional/serial/LogsFileCmd (3.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3276407661/001/logs.txt
E0108 18:40:34.377951   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3276407661/001/logs.txt: (3.124756371s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.13s)

TestFunctional/serial/InvalidService (5.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-142000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-142000: exit status 115 (590.879172ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30718 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-142000 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-142000 delete -f testdata/invalidsvc.yaml: (1.916667153s)
--- PASS: TestFunctional/serial/InvalidService (5.71s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 config get cpus: exit status 14 (59.762875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 config get cpus: exit status 14 (59.532232ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
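
The exit status 14 above is the "specified key could not be found in config" case, so the config subcommand round-trips cleanly: get on an unset key fails, set/get succeed, unset returns it to the failing state. A minimal sketch, assuming a minikube binary on PATH (profile name illustrative):

    minikube -p functional config get cpus     # exit 14: key not set
    minikube -p functional config set cpus 2
    minikube -p functional config get cpus     # prints 2
    minikube -p functional config unset cpus
    minikube -p functional config get cpus     # exit 14 again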

TestFunctional/parallel/DashboardCmd (10.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-142000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-142000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 77590: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.01s)

TestFunctional/parallel/DryRun (1.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (675.875402ms)

-- stdout --
	* [functional-142000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0108 18:42:13.704185   77493 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:42:13.704481   77493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:42:13.704491   77493 out.go:309] Setting ErrFile to fd 2...
	I0108 18:42:13.704496   77493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:42:13.704695   77493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:42:13.706073   77493 out.go:303] Setting JSON to false
	I0108 18:42:13.728337   77493 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34905,"bootTime":1704733228,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:42:13.728444   77493 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:42:13.749772   77493 out.go:177] * [functional-142000] minikube v1.32.0 on Darwin 14.2.1
	I0108 18:42:13.792792   77493 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 18:42:13.792872   77493 notify.go:220] Checking for updates...
	I0108 18:42:13.835382   77493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:42:13.856557   77493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:42:13.898262   77493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:42:13.919511   77493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 18:42:13.961327   77493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 18:42:13.982752   77493 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 18:42:13.983159   77493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 18:42:14.040866   77493 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:42:14.041025   77493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:42:14.148153   77493 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 02:42:14.137370076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:42:14.170605   77493 out.go:177] * Using the docker driver based on existing profile
	I0108 18:42:14.211540   77493 start.go:298] selected driver: docker
	I0108 18:42:14.211569   77493 start.go:902] validating driver "docker" against &{Name:functional-142000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-142000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:42:14.211722   77493 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 18:42:14.236701   77493 out.go:177] 
	W0108 18:42:14.257660   77493 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 18:42:14.278528   77493 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.38s)
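
The two runs above exercise the dry-run path: validation happens without touching the cluster, and the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY check (usable minimum 1800MB, exit status 23) while the second dry run with the profile's existing settings passes. A minimal sketch, assuming a minikube binary on PATH (profile name illustrative):

    # fails fast: 250MB is below the 1800MB usable minimum (exit 23)
    minikube start -p functional --dry-run --memory 250MB --driver=docker
    # a dry run without the undersized memory request validates cleanly
    minikube start -p functional --dry-run --driver=docker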

TestFunctional/parallel/InternationalLanguage (0.64s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (634.910138ms)

-- stdout --
	* [functional-142000] minikube v1.32.0 sur Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0108 18:42:15.078292   77551 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:42:15.078582   77551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:42:15.078587   77551 out.go:309] Setting ErrFile to fd 2...
	I0108 18:42:15.078591   77551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:42:15.078808   77551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:42:15.080518   77551 out.go:303] Setting JSON to false
	I0108 18:42:15.103514   77551 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34907,"bootTime":1704733228,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0108 18:42:15.103628   77551 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0108 18:42:15.124967   77551 out.go:177] * [functional-142000] minikube v1.32.0 sur Darwin 14.2.1
	I0108 18:42:15.166838   77551 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 18:42:15.188836   77551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	I0108 18:42:15.166942   77551 notify.go:220] Checking for updates...
	I0108 18:42:15.230725   77551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 18:42:15.251639   77551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 18:42:15.272528   77551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	I0108 18:42:15.293611   77551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 18:42:15.315411   77551 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 18:42:15.316149   77551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 18:42:15.373550   77551 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
	I0108 18:42:15.373702   77551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 18:42:15.474846   77551 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2024-01-09 02:42:15.464988528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
	I0108 18:42:15.533530   77551 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 18:42:15.554445   77551 start.go:298] selected driver: docker
	I0108 18:42:15.554462   77551 start.go:902] validating driver "docker" against &{Name:functional-142000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-142000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 18:42:15.554534   77551 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 18:42:15.578278   77551 out.go:177] 
	W0108 18:42:15.599377   77551 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 18:42:15.620530   77551 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)
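
This test drives minikube start under a French locale and asserts the localized RSRC_INSUFFICIENT_REQ_MEMORY failure: the requested 250MiB is deliberately below the 1800MB usable minimum. A sketch of the equivalent manual run (assuming the test selects the locale via LC_ALL, and that the profile already exists):

    # Request too little memory on purpose; start should refuse and exit non-zero.
    LC_ALL=fr out/minikube-darwin-amd64 start -p functional-142000 --memory=250mb --driver=docker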

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)
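
The three runs above cover the default, templated, and JSON forms of the status command. The -f flag takes a Go template over the status struct, so any subset of fields can be extracted; a sketch:

    # Print only the host and kubelet states (Go template over the status struct).
    out/minikube-darwin-amd64 -p functional-142000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    # Full machine-readable form.
    out/minikube-darwin-amd64 -p functional-142000 status -o json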

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)
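
The JSON form is the script-friendly one. A sketch of filtering it down to enabled addons (assumes jq is installed, and that the output is a map keyed by addon name with a Status field, as in current minikube releases):

    out/minikube-darwin-amd64 -p functional-142000 addons list -o json \
      | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'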

TestFunctional/parallel/PersistentVolumeClaim (27.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6133dcb2-a440-42f8-9a93-370a50be92e0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003745317s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-142000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-142000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9614c3c-8958-4d54-8680-442fd086557b] Pending
helpers_test.go:344: "sp-pod" [d9614c3c-8958-4d54-8680-442fd086557b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0108 18:41:46.060471   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [d9614c3c-8958-4d54-8680-442fd086557b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006902047s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-142000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-142000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3196f108-4424-4f3f-ace1-c928f5e2b597] Pending
helpers_test.go:344: "sp-pod" [3196f108-4424-4f3f-ace1-c928f5e2b597] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3196f108-4424-4f3f-ace1-c928f5e2b597] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006983733s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-142000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.50s)
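
The sequence above is a persistence round trip: create the claim, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod, then verify the file survived. The claim comes from testdata/storage-provisioner/pvc.yaml; a minimal equivalent sketch:

    kubectl --context functional-142000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi    # illustrative size; the real manifest may differ
    EOF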

TestFunctional/parallel/SSHCmd (0.76s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

TestFunctional/parallel/CpCmd (2.8s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp functional-142000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1414689051/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0108 18:40:44.618357   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.80s)
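
The three copies above exercise both directions of minikube cp, including a target directory that does not exist yet. The general shapes (sketch):

    # host -> node (missing parent directories are created)
    out/minikube-darwin-amd64 -p functional-142000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host, addressed as <profile>:<path>
    out/minikube-darwin-amd64 -p functional-142000 cp functional-142000:/home/docker/cp-test.txt ./cp-test.txt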

TestFunctional/parallel/MySQL (32.82s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-142000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-qbhd8" [7548ade2-6146-4851-a368-c832bb4c7be7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-qbhd8" [7548ade2-6146-4851-a368-c832bb4c7be7] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.004714838s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- mysql -ppassword -e "show databases;": exit status 1 (116.142326ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- mysql -ppassword -e "show databases;": exit status 1 (113.866179ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.82s)
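
The two ERROR 2002 exits are expected noise: the pod reports Running before mysqld has bound its socket, so the test simply retries until the query succeeds. An equivalent wait loop (sketch):

    # Poll until mysqld inside the pod accepts connections.
    until kubectl --context functional-142000 exec mysql-859648c796-qbhd8 -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2
    done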

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/75369/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/test/nested/copy/75369/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/75369.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/75369.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/75369.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /usr/share/ca-certificates/75369.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/753692.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/753692.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/753692.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /usr/share/ca-certificates/753692.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.50s)
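
The numeric names (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links, which is how the synced certificates become visible to TLS clients scanning /etc/ssl/certs. The hash for a given certificate can be computed locally (sketch; 75369.pem stands in for the cert the test synced):

    # Prints the subject hash; it should match the .0 filename checked above.
    openssl x509 -noout -hash -in 75369.pem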

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-142000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
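
The go-template flattens every node label onto one line for the assertion. For interactive use the same information is one flag away (sketch):

    kubectl --context functional-142000 get nodes --show-labels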

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "sudo systemctl is-active crio": exit status 1 (461.964614ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
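
This is the expected shape of a pass: systemctl is-active prints "inactive" and exits 3 for a unit that is not running, and minikube ssh surfaces that as a non-zero exit of its own. Checking by hand (sketch):

    out/minikube-darwin-amd64 -p functional-142000 ssh "sudo systemctl is-active crio"
    echo "exit=$?"   # non-zero here; the remote systemctl itself exited 3 (inactive)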

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 version -o=json --components: (1.060690704s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-142000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-142000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr:
I0108 18:42:27.867677   77686 out.go:296] Setting OutFile to fd 1 ...
I0108 18:42:27.876754   77686 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:27.876770   77686 out.go:309] Setting ErrFile to fd 2...
I0108 18:42:27.876776   77686 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:27.877039   77686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:42:27.877872   77686 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:27.877994   77686 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:27.878635   77686 cli_runner.go:164] Run: docker container inspect functional-142000 --format={{.State.Status}}
I0108 18:42:27.939334   77686 ssh_runner.go:195] Run: systemctl --version
I0108 18:42:27.939411   77686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142000
I0108 18:42:27.998996   77686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62888 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/functional-142000/id_rsa Username:docker}
I0108 18:42:28.093840   77686 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.55s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/localhost/my-image                | functional-142000 | f4a77d5d585fc | 1.24MB |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-142000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-142000 | 4ba4750646ce0 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr:
I0108 18:42:31.513612   77764 out.go:296] Setting OutFile to fd 1 ...
I0108 18:42:31.513842   77764 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:31.513848   77764 out.go:309] Setting ErrFile to fd 2...
I0108 18:42:31.513852   77764 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:31.514044   77764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:42:31.514648   77764 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:31.514740   77764 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:31.515137   77764 cli_runner.go:164] Run: docker container inspect functional-142000 --format={{.State.Status}}
I0108 18:42:31.566102   77764 ssh_runner.go:195] Run: systemctl --version
I0108 18:42:31.566182   77764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142000
I0108 18:42:31.618749   77764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62888 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/functional-142000/id_rsa Username:docker}
I0108 18:42:31.709568   77764 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr:
[{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f4a77d5d585fc378c33411ea4a4869fba066a95d1a51d8965e7a3f3eca279351","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-142000"],"size":"1240000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-142000"],"size":"32900000"},{"id":"4ba4750646ce0105a74c0996685e6734df742f8542d3c1e5a92295390ebc2dc1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-142000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr:
I0108 18:42:31.216685   77758 out.go:296] Setting OutFile to fd 1 ...
I0108 18:42:31.216989   77758 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:31.216994   77758 out.go:309] Setting ErrFile to fd 2...
I0108 18:42:31.216998   77758 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:31.217183   77758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:42:31.217785   77758 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:31.217904   77758 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:31.218377   77758 cli_runner.go:164] Run: docker container inspect functional-142000 --format={{.State.Status}}
I0108 18:42:31.269311   77758 ssh_runner.go:195] Run: systemctl --version
I0108 18:42:31.269390   77758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142000
I0108 18:42:31.320226   77758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62888 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/functional-142000/id_rsa Username:docker}
I0108 18:42:31.409298   77758 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
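
The JSON listing is an array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into jq (sketch; assumes jq is installed):

    out/minikube-darwin-amd64 -p functional-142000 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'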

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr:
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 4ba4750646ce0105a74c0996685e6734df742f8542d3c1e5a92295390ebc2dc1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-142000
size: "30"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-142000
size: "32900000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr:
I0108 18:42:28.395589   77707 out.go:296] Setting OutFile to fd 1 ...
I0108 18:42:28.395891   77707 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:28.395897   77707 out.go:309] Setting ErrFile to fd 2...
I0108 18:42:28.395901   77707 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:28.396097   77707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:42:28.396732   77707 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:28.396824   77707 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:28.397237   77707 cli_runner.go:164] Run: docker container inspect functional-142000 --format={{.State.Status}}
I0108 18:42:28.449719   77707 ssh_runner.go:195] Run: systemctl --version
I0108 18:42:28.449793   77707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142000
I0108 18:42:28.501246   77707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62888 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/functional-142000/id_rsa Username:docker}
I0108 18:42:28.593010   77707 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh pgrep buildkitd: exit status 1 (368.214034ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr: (1.852282356s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr:
I0108 18:42:29.066899   77735 out.go:296] Setting OutFile to fd 1 ...
I0108 18:42:29.067923   77735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:29.067930   77735 out.go:309] Setting ErrFile to fd 2...
I0108 18:42:29.067935   77735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:42:29.068134   77735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:42:29.068729   77735 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:29.069407   77735 config.go:182] Loaded profile config "functional-142000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 18:42:29.069826   77735 cli_runner.go:164] Run: docker container inspect functional-142000 --format={{.State.Status}}
I0108 18:42:29.120012   77735 ssh_runner.go:195] Run: systemctl --version
I0108 18:42:29.120085   77735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-142000
I0108 18:42:29.171482   77735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62888 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/functional-142000/id_rsa Username:docker}
I0108 18:42:29.263409   77735 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3103588297.tar
I0108 18:42:29.263511   77735 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 18:42:29.271761   77735 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3103588297.tar
I0108 18:42:29.275767   77735 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3103588297.tar: stat -c "%s %y" /var/lib/minikube/build/build.3103588297.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3103588297.tar': No such file or directory
I0108 18:42:29.275807   77735 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3103588297.tar --> /var/lib/minikube/build/build.3103588297.tar (3072 bytes)
I0108 18:42:29.295956   77735 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3103588297
I0108 18:42:29.304383   77735 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3103588297 -xf /var/lib/minikube/build/build.3103588297.tar
I0108 18:42:29.313490   77735 docker.go:346] Building image: /var/lib/minikube/build/build.3103588297
I0108 18:42:29.313568   77735 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-142000 /var/lib/minikube/build/build.3103588297
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f4a77d5d585fc378c33411ea4a4869fba066a95d1a51d8965e7a3f3eca279351 done
#8 naming to localhost/my-image:functional-142000 done
#8 DONE 0.0s
I0108 18:42:30.818157   77735 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-142000 /var/lib/minikube/build/build.3103588297: (1.50458743s)
I0108 18:42:30.818230   77735 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3103588297
I0108 18:42:30.826716   77735 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3103588297.tar
I0108 18:42:30.834803   77735 build_images.go:207] Built localhost/my-image:functional-142000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3103588297.tar
I0108 18:42:30.834833   77735 build_images.go:123] succeeded building to: functional-142000
I0108 18:42:30.834838   77735 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)
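
The build steps logged above (FROM busybox, RUN true, ADD content.txt) imply a three-line Dockerfile; a reconstruction of the same flow (sketch; the real build context lives in testdata/build, and the content.txt payload here is a placeholder):

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    echo placeholder > content.txt
    out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 .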

TestFunctional/parallel/ImageCommands/Setup (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.396439124s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-142000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.46s)

TestFunctional/parallel/DockerEnv/bash (2.06s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-142000 docker-env) && out/minikube-darwin-amd64 status -p functional-142000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-142000 docker-env) && out/minikube-darwin-amd64 status -p functional-142000": (1.32323837s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-142000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.06s)
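
docker-env emits shell exports that point the local docker CLI at the daemon inside the minikube node, which is why the second docker images call lists the cluster's images. To undo it in the same shell (sketch):

    eval "$(out/minikube-darwin-amd64 -p functional-142000 docker-env)"
    docker images    # now talks to the daemon inside functional-142000
    eval "$(out/minikube-darwin-amd64 -p functional-142000 docker-env --unset)"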

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr: (3.614011908s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr: (2.119983553s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.45s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.245428772s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-142000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image load --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr: (3.976682113s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image save gcr.io/google-containers/addon-resizer:functional-142000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image save gcr.io/google-containers/addon-resizer:functional-142000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.423828902s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image rm gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.703465999s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.02s)
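
Together with ImageSaveToFile above, this completes the tarball round trip (sketch):

    out/minikube-darwin-amd64 -p functional-142000 image save gcr.io/google-containers/addon-resizer:functional-142000 ./addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-142000 image load ./addon-resizer-save.tar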

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-142000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image save --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr
E0108 18:41:05.098678   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image save --daemon gcr.io/google-containers/addon-resizer:functional-142000 --alsologtostderr: (1.557091413s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-142000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.67s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-142000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-142000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8krhp" [5b1f45fd-b419-4def-b715-1c64a625c309] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8krhp" [5b1f45fd-b419-4def-b715-1c64a625c309] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.00635893s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 77261: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [34c0299d-f26f-462a-8c1f-88747aaebb59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [34c0299d-f26f-462a-8c1f-88747aaebb59] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005298153s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service list -o json
functional_test.go:1493: Took "608.518263ms" to run "out/minikube-darwin-amd64 -p functional-142000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 service --namespace=default --https --url hello-node: signal: killed (15.005743236s)

-- stdout --
	https://127.0.0.1:63167
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:63167
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-142000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 77291: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 service hello-node --url --format={{.IP}}: signal: killed (15.002595172s)

-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 service hello-node --url: signal: killed (15.003390024s)

-- stdout --
	http://127.0.0.1:63210
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:63210
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "513.315615ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "80.509439ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "409.784352ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "81.133911ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1: exit status 1 (481.070117ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-142000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup555470218/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-142000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-142000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-142000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (21.77s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-233000 --driver=docker 
E0108 18:43:07.982090   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-233000 --driver=docker : (21.772948851s)
--- PASS: TestImageBuild/serial/Setup (21.77s)

TestImageBuild/serial/NormalBuild (1.77s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-233000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-233000: (1.768266085s)
--- PASS: TestImageBuild/serial/NormalBuild (1.77s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-233000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-233000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-233000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

TestJSONOutput/start/Command (36.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-727000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0108 18:50:50.514429   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:51:18.208389   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-727000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.806346031s)
--- PASS: TestJSONOutput/start/Command (36.81s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-727000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-727000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-727000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-727000 --output=json --user=testUser: (10.97703222s)
--- PASS: TestJSONOutput/stop/Command (10.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-263000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-263000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (383.939209ms)

-- stdout --
	{"specversion":"1.0","id":"06d32ae3-a9b8-4f89-89c0-d7dbc7728e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-263000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"80dafaa6-798f-493e-9806-2f84731323db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"dc2f6c76-55f4-4503-afcf-8c18f99e9cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig"}}
	{"specversion":"1.0","id":"06ed79e2-d103-4065-9202-77ee7f193096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7c9a30a4-395b-45b6-b603-3674c39574a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47a8f850-4a67-4949-bc98-86062e916938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube"}}
	{"specversion":"1.0","id":"dca9f82f-576d-4cbe-9b12-456383c164cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70e27de1-ae16-451f-887a-c65bcb900d19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-263000
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (23.61s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-453000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-453000 --network=: (21.155946834s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-453000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-453000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-453000: (2.400699575s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.61s)

TestKicCustomNetwork/use_default_bridge_network (22.87s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-275000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-275000 --network=bridge: (20.566187304s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-275000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-275000: (2.252577925s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.87s)

TestKicExistingNetwork (23.34s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-673000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-673000 --network=existing-network: (20.744748775s)
helpers_test.go:175: Cleaning up "existing-network-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-673000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-673000: (2.251099422s)
--- PASS: TestKicExistingNetwork (23.34s)

TestKicCustomSubnet (23.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-923000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-923000 --subnet=192.168.60.0/24: (20.807435035s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-923000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-923000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-923000: (2.476670351s)
--- PASS: TestKicCustomSubnet (23.34s)

TestKicStaticIP (23.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-025000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-025000 --static-ip=192.168.200.200: (21.208558731s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-025000 ip
helpers_test.go:175: Cleaning up "static-ip-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-025000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-025000: (2.397237793s)
--- PASS: TestKicStaticIP (23.84s)

TestMainNoArgs (0.1s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.10s)

TestMinikubeProfile (49.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-875000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-875000 --driver=docker : (20.878752742s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-878000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-878000 --driver=docker : (22.143236934s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-875000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-878000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-878000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-878000: (2.415783208s)
helpers_test.go:175: Cleaning up "first-875000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-875000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-875000: (2.456237989s)
--- PASS: TestMinikubeProfile (49.54s)

TestMountStart/serial/StartWithMountFirst (7.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-136000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-136000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.394498299s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.39s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-136000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (7.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-151000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-151000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.27435159s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.28s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-151000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (2.17s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-136000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-136000 --alsologtostderr -v=5: (2.172227431s)
--- PASS: TestMountStart/serial/DeleteFirst (2.17s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-151000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.56s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-151000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-151000: (1.562195405s)
--- PASS: TestMountStart/serial/Stop (1.56s)

TestMountStart/serial/RestartStopped (8.24s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-151000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-151000: (7.242244526s)
--- PASS: TestMountStart/serial/RestartStopped (8.24s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-151000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (62.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-500000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0108 18:55:24.101194   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:55:50.488199   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-500000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m2.154501739s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.90s)

TestMultiNode/serial/DeployApp2Nodes (39.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-500000 -- rollout status deployment/busybox: (2.912455031s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-r8wpp -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-xkcmp -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-r8wpp -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-xkcmp -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-r8wpp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-xkcmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.04s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-r8wpp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-r8wpp -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-xkcmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-500000 -- exec busybox-5bc68d56bd-xkcmp -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (15.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-500000 -v 3 --alsologtostderr
E0108 18:56:47.151439   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-500000 -v 3 --alsologtostderr: (14.327593148s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr: (1.025130955s)
--- PASS: TestMultiNode/serial/AddNode (15.35s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-500000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.42s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.42s)

TestMultiNode/serial/CopyFile (13.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp testdata/cp-test.txt multinode-500000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile220735836/001/cp-test_multinode-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000:/home/docker/cp-test.txt multinode-500000-m02:/home/docker/cp-test_multinode-500000_multinode-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test_multinode-500000_multinode-500000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000:/home/docker/cp-test.txt multinode-500000-m03:/home/docker/cp-test_multinode-500000_multinode-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test_multinode-500000_multinode-500000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp testdata/cp-test.txt multinode-500000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile220735836/001/cp-test_multinode-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m02:/home/docker/cp-test.txt multinode-500000:/home/docker/cp-test_multinode-500000-m02_multinode-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test_multinode-500000-m02_multinode-500000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m02:/home/docker/cp-test.txt multinode-500000-m03:/home/docker/cp-test_multinode-500000-m02_multinode-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test_multinode-500000-m02_multinode-500000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp testdata/cp-test.txt multinode-500000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile220735836/001/cp-test_multinode-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m03:/home/docker/cp-test.txt multinode-500000:/home/docker/cp-test_multinode-500000-m03_multinode-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000 "sudo cat /home/docker/cp-test_multinode-500000-m03_multinode-500000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 cp multinode-500000-m03:/home/docker/cp-test.txt multinode-500000-m02:/home/docker/cp-test_multinode-500000-m03_multinode-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 ssh -n multinode-500000-m02 "sudo cat /home/docker/cp-test_multinode-500000-m03_multinode-500000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.84s)

TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-500000 node stop m03: (1.502652763s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-500000 status: exit status 7 (713.622061ms)

-- stdout --
	multinode-500000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-500000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-500000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr: exit status 7 (709.210568ms)

-- stdout --
	multinode-500000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-500000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-500000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 18:57:12.464329   80850 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:57:12.464639   80850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:57:12.464645   80850 out.go:309] Setting ErrFile to fd 2...
	I0108 18:57:12.464649   80850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:57:12.464833   80850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:57:12.465009   80850 out.go:303] Setting JSON to false
	I0108 18:57:12.465032   80850 mustload.go:65] Loading cluster: multinode-500000
	I0108 18:57:12.465067   80850 notify.go:220] Checking for updates...
	I0108 18:57:12.465347   80850 config.go:182] Loaded profile config "multinode-500000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 18:57:12.465359   80850 status.go:255] checking status of multinode-500000 ...
	I0108 18:57:12.465766   80850 cli_runner.go:164] Run: docker container inspect multinode-500000 --format={{.State.Status}}
	I0108 18:57:12.517496   80850 status.go:330] multinode-500000 host status = "Running" (err=<nil>)
	I0108 18:57:12.517529   80850 host.go:66] Checking if "multinode-500000" exists ...
	I0108 18:57:12.517769   80850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-500000
	I0108 18:57:12.570103   80850 host.go:66] Checking if "multinode-500000" exists ...
	I0108 18:57:12.570392   80850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 18:57:12.570465   80850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-500000
	I0108 18:57:12.621286   80850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63706 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/multinode-500000/id_rsa Username:docker}
	I0108 18:57:12.712415   80850 ssh_runner.go:195] Run: systemctl --version
	I0108 18:57:12.717133   80850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 18:57:12.727060   80850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-500000
	I0108 18:57:12.779286   80850 kubeconfig.go:92] found "multinode-500000" server: "https://127.0.0.1:63710"
	I0108 18:57:12.779316   80850 api_server.go:166] Checking apiserver status ...
	I0108 18:57:12.779358   80850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 18:57:12.789829   80850 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2257/cgroup
	W0108 18:57:12.798460   80850 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2257/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0108 18:57:12.798514   80850 ssh_runner.go:195] Run: ls
	I0108 18:57:12.802750   80850 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63710/healthz ...
	I0108 18:57:12.807672   80850 api_server.go:279] https://127.0.0.1:63710/healthz returned 200:
	ok
	I0108 18:57:12.807693   80850 status.go:421] multinode-500000 apiserver status = Running (err=<nil>)
	I0108 18:57:12.807705   80850 status.go:257] multinode-500000 status: &{Name:multinode-500000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 18:57:12.807716   80850 status.go:255] checking status of multinode-500000-m02 ...
	I0108 18:57:12.807941   80850 cli_runner.go:164] Run: docker container inspect multinode-500000-m02 --format={{.State.Status}}
	I0108 18:57:12.859457   80850 status.go:330] multinode-500000-m02 host status = "Running" (err=<nil>)
	I0108 18:57:12.859483   80850 host.go:66] Checking if "multinode-500000-m02" exists ...
	I0108 18:57:12.859776   80850 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-500000-m02
	I0108 18:57:12.910867   80850 host.go:66] Checking if "multinode-500000-m02" exists ...
	I0108 18:57:12.911119   80850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 18:57:12.911176   80850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-500000-m02
	I0108 18:57:12.962194   80850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63748 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/multinode-500000-m02/id_rsa Username:docker}
	I0108 18:57:13.052714   80850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 18:57:13.062678   80850 status.go:257] multinode-500000-m02 status: &{Name:multinode-500000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 18:57:13.062705   80850 status.go:255] checking status of multinode-500000-m03 ...
	I0108 18:57:13.062997   80850 cli_runner.go:164] Run: docker container inspect multinode-500000-m03 --format={{.State.Status}}
	I0108 18:57:13.114470   80850 status.go:330] multinode-500000-m03 host status = "Stopped" (err=<nil>)
	I0108 18:57:13.114495   80850 status.go:343] host is not running, skipping remaining checks
	I0108 18:57:13.114504   80850 status.go:257] multinode-500000-m03 status: &{Name:multinode-500000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
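
Note: the stderr above ends with an apiserver probe ("Checking apiserver healthz at https://127.0.0.1:63710/healthz ... returned 200"). A minimal Go sketch of that kind of check follows; it is illustrative rather than minikube's actual implementation, the URL is taken from the log, and skipping TLS verification is an assumption for a locally forwarded apiserver with a self-signed certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz GETs an apiserver /healthz endpoint and reports whether it
// answered 200, roughly mirroring the api_server.go lines in the log above.
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the local apiserver presents a self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := probeHealthz("https://127.0.0.1:63710/healthz") // port taken from the log above
	fmt.Println(ok, err)
}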

TestMultiNode/serial/StartAfterStop (13.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-500000 node start m03 --alsologtostderr: (12.777760227s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.81s)

TestMultiNode/serial/RestartKeepsNodes (116.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-500000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-500000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-500000: (23.047879587s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-500000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-500000 --wait=true -v=8 --alsologtostderr: (1m33.292895604s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-500000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.47s)

TestMultiNode/serial/DeleteNode (5.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-500000 node delete m03: (5.052249797s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.87s)
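
The go-template passed to kubectl above walks .items and .status.conditions and prints the status of each node's Ready condition. A self-contained sketch evaluating the same template with Go's text/template, against made-up node data (the mock structure is illustrative only):

package main

import (
	"os"
	"text/template"
)

// The same template the test passes to kubectl above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Stand-in for the node list kubectl would return; fields are invented
	// for illustration.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" — one line per node whose Ready condition is present.
}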

TestMultiNode/serial/StopMultiNode (21.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-500000 stop: (21.593127046s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-500000 status: exit status 7 (164.115038ms)

-- stdout --
	multinode-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-500000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr: exit status 7 (162.935125ms)

-- stdout --
	multinode-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-500000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 18:59:51.078737   81289 out.go:296] Setting OutFile to fd 1 ...
	I0108 18:59:51.079041   81289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:59:51.079046   81289 out.go:309] Setting ErrFile to fd 2...
	I0108 18:59:51.079050   81289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 18:59:51.079241   81289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
	I0108 18:59:51.079449   81289 out.go:303] Setting JSON to false
	I0108 18:59:51.079472   81289 mustload.go:65] Loading cluster: multinode-500000
	I0108 18:59:51.079504   81289 notify.go:220] Checking for updates...
	I0108 18:59:51.079796   81289 config.go:182] Loaded profile config "multinode-500000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 18:59:51.079808   81289 status.go:255] checking status of multinode-500000 ...
	I0108 18:59:51.080216   81289 cli_runner.go:164] Run: docker container inspect multinode-500000 --format={{.State.Status}}
	I0108 18:59:51.132465   81289 status.go:330] multinode-500000 host status = "Stopped" (err=<nil>)
	I0108 18:59:51.132488   81289 status.go:343] host is not running, skipping remaining checks
	I0108 18:59:51.132494   81289 status.go:257] multinode-500000 status: &{Name:multinode-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 18:59:51.132517   81289 status.go:255] checking status of multinode-500000-m02 ...
	I0108 18:59:51.132750   81289 cli_runner.go:164] Run: docker container inspect multinode-500000-m02 --format={{.State.Status}}
	I0108 18:59:51.183631   81289 status.go:330] multinode-500000-m02 host status = "Stopped" (err=<nil>)
	I0108 18:59:51.183657   81289 status.go:343] host is not running, skipping remaining checks
	I0108 18:59:51.183665   81289 status.go:257] multinode-500000-m02 status: &{Name:multinode-500000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.92s)
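
The status flow in the stderr above runs `docker container inspect <name> --format={{.State.Status}}` per node and, on a stopped host, skips the remaining kubelet/apiserver checks. A minimal Go sketch of that first step (illustrative, not minikube's cli_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus shells out the same way the status flow logged above does.
// When the container is not running, the caller can skip the remaining
// checks, matching the "host is not running, skipping remaining checks"
// lines in the stderr.
func hostStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := hostStatus("multinode-500000") // profile name from the log above
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("host status:", status) // e.g. "running" or "exited"
}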

TestMultiNode/serial/RestartMultiNode (59.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-500000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0108 19:00:24.093674   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-500000 --wait=true -v=8 --alsologtostderr --driver=docker : (58.887889928s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-500000 status --alsologtostderr
E0108 19:00:50.479873   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.75s)

TestMultiNode/serial/ValidateNameConflict (25.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-500000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-500000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-500000-m02 --driver=docker : exit status 14 (467.348956ms)

-- stdout --
	* [multinode-500000-m02] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-500000-m02' is duplicated with machine name 'multinode-500000-m02' in profile 'multinode-500000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-500000-m03 --driver=docker 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-500000-m03 --driver=docker : (21.67763007s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-500000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-500000: exit status 80 (466.805728ms)

-- stdout --
	* Adding node m03 to cluster multinode-500000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-500000-m03 already exists in multinode-500000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-500000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-500000-m03: (2.411185114s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.08s)
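
Both failures above (MK_USAGE for a duplicated profile name, GUEST_NODE_ADD for an existing node) come down to uniqueness checks. A hypothetical sketch of such a check — the profile type and validateName helper are invented for illustration and are not minikube's implementation:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical data: a profile owns machine (node) names such as
// multinode-500000 and multinode-500000-m02, as in the log above.
type profile struct {
	name  string
	nodes []string
}

// validateName rejects a new profile whose name collides with an existing
// profile or machine name, mirroring the behavior exercised above.
func validateName(newName string, existing []profile) error {
	for _, p := range existing {
		if p.name == newName {
			return errors.New("profile name should be unique")
		}
		for _, n := range p.nodes {
			if n == newName {
				return fmt.Errorf("name %q is duplicated with a machine name in profile %q", newName, p.name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{name: "multinode-500000", nodes: []string{"multinode-500000", "multinode-500000-m02"}}}
	fmt.Println(validateName("multinode-500000-m02", existing)) // collides, as in the test
}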

TestPreload (144.02s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-902000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0108 19:02:13.531349   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-902000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m17.925110585s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-902000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-902000 image pull gcr.io/k8s-minikube/busybox: (1.411362721s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-902000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-902000: (10.764487701s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-902000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-902000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (51.127913464s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-902000 image list
helpers_test.go:175: Cleaning up "test-preload-902000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-902000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-902000: (2.485248986s)
--- PASS: TestPreload (144.02s)

TestScheduledStopUnix (94.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-290000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-290000 --memory=2048 --driver=docker : (20.269856497s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-290000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-290000 -n scheduled-stop-290000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-290000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-290000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-290000 -n scheduled-stop-290000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-290000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-290000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-290000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-290000: exit status 7 (118.139282ms)

-- stdout --
	scheduled-stop-290000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-290000 -n scheduled-stop-290000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-290000 -n scheduled-stop-290000: exit status 7 (111.005152ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-290000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-290000: (2.169868051s)
--- PASS: TestScheduledStopUnix (94.33s)
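
The sequence above schedules a stop (--schedule 5m, then 15s), cancels it with --cancel-scheduled, and re-schedules. A minimal sketch of that schedule/cancel pattern using time.AfterFunc — illustrative only, not minikube's daemonized implementation:

package main

import (
	"fmt"
	"time"
)

// scheduleStop arms a timer that fires stop() after d and returns a cancel
// function, mirroring the schedule / --cancel-scheduled flow above.
func scheduleStop(d time.Duration, stop func()) (cancel func() bool) {
	t := time.AfterFunc(d, stop)
	return t.Stop
}

func main() {
	cancel := scheduleStop(15*time.Second, func() { fmt.Println("stopping cluster") })
	// Cancelling before the timer fires corresponds to `stop --cancel-scheduled`.
	if cancel() {
		fmt.Println("scheduled stop cancelled")
	}
}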

TestSkaffold (122.6s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1584897475 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-709000 --memory=2600 --driver=docker 
E0108 19:05:24.085708   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-709000 --memory=2600 --driver=docker : (21.051044939s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1584897475 run --minikube-profile skaffold-709000 --kube-context skaffold-709000 --status-check=true --port-forward=false --interactive=false
E0108 19:05:50.471440   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1584897475 run --minikube-profile skaffold-709000 --kube-context skaffold-709000 --status-check=true --port-forward=false --interactive=false: (1m25.361972672s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-65b7764fcf-qkt97" [c1a96765-5345-45ac-bfb4-181cf76c8007] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003730728s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-589d8b5657-pkg25" [2b4ecfc0-781a-44b8-96e2-61bb7620412f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004641993s
helpers_test.go:175: Cleaning up "skaffold-709000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-709000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-709000: (2.968814874s)
--- PASS: TestSkaffold (122.60s)

TestInsufficientStorage (9.99s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-590000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-590000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.027022915s)

-- stdout --
	{"specversion":"1.0","id":"d6316e29-c928-49bb-8db5-d73061e59367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-590000] minikube v1.32.0 on Darwin 14.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c37b3e2-6415-4afb-8392-1662d1536d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"e5bff0ef-bde6-4458-8da7-af700aa34cd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig"}}
	{"specversion":"1.0","id":"1a6edd85-287a-4142-a853-1132449f412f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"93e66f3d-139a-4a59-a128-0a598c42577b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e62992d7-463f-42cd-a820-3dbd76d0d51f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube"}}
	{"specversion":"1.0","id":"c84840f0-a9b7-49bc-9bbb-78fe87d20577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"465b106f-e925-463e-91cb-2b5f4a7039ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a6d00000-0165-4473-a9dd-cae746659700","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e679d1d3-9700-424a-b248-94c09a11b95f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"acd489ab-10f7-42b4-a7b9-b41924065a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"271c2770-1917-4d40-a585-50b575d25b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-590000 in cluster insufficient-storage-590000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"459bd11e-bb94-4420-a9db-7ea402de2773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0230c209-2689-4068-a5fa-68c948b38505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"24fb39d5-96ff-46ce-a092-b2b67efe996e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-590000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-590000 --output=json --layout=cluster: exit status 7 (376.390553ms)

-- stdout --
	{"Name":"insufficient-storage-590000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-590000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0108 19:07:28.946117   82577 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-590000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-590000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-590000 --output=json --layout=cluster: exit status 7 (373.078498ms)

-- stdout --
	{"Name":"insufficient-storage-590000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-590000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0108 19:07:29.319911   82587 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-590000" does not appear in /Users/jenkins/minikube-integration/17866-74927/kubeconfig
	E0108 19:07:29.329201   82587 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/insufficient-storage-590000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-590000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-590000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-590000: (2.211080961s)
--- PASS: TestInsufficientStorage (9.99s)
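
Each --output=json line above is a CloudEvents-style envelope whose data payload carries step messages and, finally, an io.k8s.sigs.minikube.error with exitcode 26. A small sketch of decoding those lines with encoding/json; the struct covers only the fields visible in the log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the envelope fields visible in the JSON lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
		Advice   string `json:"advice"`
	} `json:"data"`
}

func main() {
	// Read minikube's JSON output line by line, e.g. piped via stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exitcode %s): %s\n", e.Data.ExitCode, e.Data.Message)
		}
	}
}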

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17866
- KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2367532401/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2367532401/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2367532401/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2367532401/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.10s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.51s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17866
- KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3157005165/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3157005165/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3157005165/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3157005165/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.51s)

TestStoppedBinaryUpgrade/Setup (0.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-702000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-702000: (3.433229789s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.43s)

TestPause/serial/Start (37.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-622000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0108 19:13:27.125977   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:13:29.492375   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-622000 --memory=2048 --install-addons=false --wait=all --driver=docker : (37.277756548s)
--- PASS: TestPause/serial/Start (37.28s)

TestPause/serial/SecondStartNoReconfiguration (39.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-622000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-622000 --alsologtostderr -v=1 --driver=docker : (39.941389074s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.96s)

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-622000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-622000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-622000 --output=json --layout=cluster: exit status 2 (395.295801ms)

-- stdout --
	{"Name":"pause-622000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-622000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
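
The --layout=cluster payload above nests component status codes (418 Paused, 405 Stopped) under the cluster and each node. A sketch of unmarshalling it; the struct follows only the keys visible in the output, and the sample string is trimmed from the log above:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models only the keys visible in the --layout=cluster output
// above; the real payload has more fields.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		Components map[string]struct {
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-622000","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-622000","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.Name, cs.StatusName, "apiserver:", cs.Nodes[0].Components["apiserver"].StatusName)
}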

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-622000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-622000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (2.46s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-622000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-622000 --alsologtostderr -v=5: (2.45885215s)
--- PASS: TestPause/serial/DeletePaused (2.46s)

TestPause/serial/VerifyDeletedResources (0.54s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-622000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-622000: exit status 1 (51.06972ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-622000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (362.218307ms)

-- stdout --
	* [NoKubernetes-680000] minikube v1.32.0 on Darwin 14.2.1
	  - MINIKUBE_LOCATION=17866
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

TestNoKubernetes/serial/StartWithK8s (22.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-680000 --driver=docker 
E0108 19:14:51.410566   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-680000 --driver=docker : (21.633250371s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-680000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.03s)

TestNoKubernetes/serial/StartWithStopK8s (16.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --driver=docker : (14.275359347s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-680000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-680000 status -o json: exit status 2 (387.903372ms)

-- stdout --
	{"Name":"NoKubernetes-680000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-680000
E0108 19:15:24.070202   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-680000: (2.210340421s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.87s)

TestNoKubernetes/serial/Start (7.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-680000 --no-kubernetes --driver=docker : (7.327593358s)
--- PASS: TestNoKubernetes/serial/Start (7.33s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-680000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-680000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (364.057746ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

TestNoKubernetes/serial/ProfileList (1.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.30s)

TestNoKubernetes/serial/Stop (1.53s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-680000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-680000: (1.530452477s)
--- PASS: TestNoKubernetes/serial/Stop (1.53s)

TestNoKubernetes/serial/StartNoArgs (8.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-680000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-680000 --driver=docker : (8.478813333s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-680000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-680000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (404.75719ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)
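
Both VerifyK8sNotRunning checks above treat a non-zero exit from `systemctl is-active --quiet service kubelet` as "kubelet not running" (status 3 is systemd's usual code for an inactive unit). A sketch of extracting that exit code in Go; note the test runs the command inside the node over ssh, while this illustrative version runs it locally:

package main

import (
	"fmt"
	"os/exec"
)

// isActive runs `systemctl is-active --quiet <unit>` and interprets the
// exit code: 0 means active, non-zero (typically 3) means inactive, which
// is what the assertions above rely on.
func isActive(unit string) (bool, int, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, 0, nil
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return false, ee.ExitCode(), nil
	}
	return false, -1, err // systemctl itself could not be run
}

func main() {
	active, code, err := isActive("kubelet")
	fmt.Println(active, code, err)
}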

TestNetworkPlugins/group/auto/Start (76.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0108 19:15:50.456273   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (1m16.134473299s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.13s)

TestNetworkPlugins/group/kindnet/Start (51.51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.508023455s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9gr59" [85f320bb-6055-455c-af6a-f8e8228fe562] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 19:17:07.640070   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9gr59" [85f320bb-6055-455c-af6a-f8e8228fe562] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006092141s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hh86d" [2aaa94f7-af94-4e11-b5b2-66ae8f12563d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004728114s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mgs8n" [66784ce8-164c-4003-aa02-31829e49f063] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mgs8n" [66784ce8-164c-4003-aa02-31829e49f063] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007175482s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

TestNetworkPlugins/group/calico/Start (74.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m14.87058013s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.87s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (53.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (53.850035193s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.85s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sl6qv" [7e32df53-ef2f-438b-ba5d-a2949d750ffe] Running
E0108 19:18:53.588085   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00416518s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-24zbg" [b5ec95a0-5e23-47d4-ab50-0a241818051a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-24zbg" [b5ec95a0-5e23-47d4-ab50-0a241818051a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004674547s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zpkcn" [d962e375-965e-48f9-9194-b8d946143a8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zpkcn" [d962e375-965e-48f9-9194-b8d946143a8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005200398s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (38.53s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (38.526673696s)
--- PASS: TestNetworkPlugins/group/false/Start (38.53s)

TestNetworkPlugins/group/enable-default-cni/Start (38.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (38.266961238s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.27s)

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lgvrv" [bf53f06f-0fa1-4086-8bc3-bd88a317a03c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lgvrv" [bf53f06f-0fa1-4086-8bc3-bd88a317a03c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003878542s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4rshf" [dfac001c-0004-4bb1-9cbf-c406fe444b3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4rshf" [dfac001c-0004-4bb1-9cbf-c406fe444b3e] Running
E0108 19:20:24.145188   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005718579s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (51.29s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0108 19:20:50.532816   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (51.292272059s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.29s)

TestNetworkPlugins/group/bridge/Start (45.03s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (45.030225323s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.03s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7zhtz" [bcdf6940-5c5d-4607-880c-d42cd7b451d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7zhtz" [bcdf6940-5c5d-4607-880c-d42cd7b451d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00375382s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fqxlg" [05d56154-6614-406d-8d76-d59cc43ad942] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00592329s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g927f" [ccd8dc1e-45a8-4e3d-8925-0fb94edf046f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g927f" [ccd8dc1e-45a8-4e3d-8925-0fb94edf046f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005574157s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (65.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0108 19:22:13.178757   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-798000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (1m5.192701146s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (65.19s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-798000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-798000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zlz2l" [c79a27d1-f51b-4b8e-b57c-9cce39d16093] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zlz2l" [c79a27d1-f51b-4b8e-b57c-9cce39d16093] Running
E0108 19:23:24.859415   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.005325157s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.26s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-798000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-798000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (149.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-363000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0108 19:23:52.601124   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.606307   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.616431   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.638412   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.679469   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.759649   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:52.920338   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:53.240445   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:53.880814   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:55.160935   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:23:57.721154   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:24:01.309339   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.314479   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.324643   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.344928   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.385100   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.465766   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.626032   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:01.946590   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:02.586755   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:02.841236   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:24:03.866935   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:06.427329   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:11.547366   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:13.143885   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:24:21.787319   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:33.624378   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:24:42.267498   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:24:46.778378   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:25:09.544397   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:25:14.583692   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
E0108 19:25:15.311729   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.317211   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.327298   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.347990   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.388951   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.470960   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.633023   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:15.953351   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:16.593831   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:16.707292   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:16.713627   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:16.723965   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:16.744115   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:16.784882   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:16.864956   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:17.025162   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:17.346259   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:17.873887   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:17.988348   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:19.268768   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:20.462726   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:21.828999   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:23.226742   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:25:24.138587   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:25:25.582852   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:26.950380   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:35.823052   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:37.190305   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:25:50.524315   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 19:25:56.303737   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:25:57.670143   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-363000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (2m29.547766734s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (149.55s)

TestStartStop/group/no-preload/serial/DeployApp (7.55s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-363000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad203b3f-2381-4ccc-bbc6-18912f2bffe6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad203b3f-2381-4ccc-bbc6-18912f2bffe6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004091511s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-363000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-363000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-363000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108709008s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-363000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-363000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-363000 --alsologtostderr -v=3: (10.998814226s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-363000 -n no-preload-363000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-363000 -n no-preload-363000: exit status 7 (111.682003ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-363000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0108 19:26:41.505100   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:41.511521   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:41.523746   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/no-preload/serial/SecondStart (340.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-363000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0108 19:26:41.546113   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:41.586753   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:41.667316   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:41.828688   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:42.053390   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:42.150339   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:42.790649   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:44.071781   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:45.145156   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
E0108 19:26:46.631893   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:47.174140   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:26:51.752364   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:26:57.414093   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:27:01.993250   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:27:02.933058   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:27:07.628291   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/skaffold-709000/client.crt: no such file or directory
E0108 19:27:17.894161   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:27:22.473262   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
E0108 19:27:25.700480   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:27:30.615273   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/auto-798000/client.crt: no such file or directory
E0108 19:27:53.381285   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
E0108 19:27:58.853881   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
E0108 19:27:59.182071   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:28:00.549526   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:28:03.433081   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-363000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m39.81776853s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-363000 -n no-preload-363000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.25s)

TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-901000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-901000 --alsologtostderr -v=3: (1.627626684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000: exit status 7 (108.866475ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-901000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.43s)
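
Note: the stop / status / enable-addon sequence above can be replayed by hand. A minimal sketch, assuming a local minikube build at out/minikube-darwin-amd64 and the profile name from this run:

    # Stop the cluster, then confirm the host reports "Stopped".
    out/minikube-darwin-amd64 stop -p old-k8s-version-901000 --alsologtostderr -v=3
    # status exits 7 for a stopped cluster; the test treats that as acceptable.
    out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-901000 -n old-k8s-version-901000 || true
    # Addons can be enabled while the cluster is stopped.
    out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-901000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4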

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7fksm" [5f6c12e1-004d-4bcf-9116-4f9904c3080d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0108 19:32:25.692696   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kindnet-798000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7fksm" [5f6c12e1-004d-4bcf-9116-4f9904c3080d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005730675s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)
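
Note: the test polls via Go helpers; a rough shell equivalent of the same readiness check (label, namespace, and context taken from the log above) would be:

    # Wait for the dashboard pod to become Ready, as the test does for up to 9m0s.
    kubectl --context no-preload-363000 wait pod \
      -l k8s-app=kubernetes-dashboard \
      -n kubernetes-dashboard \
      --for=condition=Ready --timeout=9m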

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7fksm" [5f6c12e1-004d-4bcf-9116-4f9904c3080d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003809s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-363000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-363000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (3.36s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-363000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-363000 -n no-preload-363000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-363000 -n no-preload-363000: exit status 2 (397.586794ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-363000 -n no-preload-363000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-363000 -n no-preload-363000: exit status 2 (400.31213ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-363000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-363000 -n no-preload-363000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-363000 -n no-preload-363000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.36s)
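
Note: the pause verification flow above is: pause, check that the apiserver reports "Paused" while the kubelet reports "Stopped" (both probes exit with status 2), then unpause. A sketch using the exact commands from this run:

    out/minikube-darwin-amd64 pause -p no-preload-363000 --alsologtostderr -v=1
    # Both status probes exit 2 while paused, hence the `|| true`.
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-363000 -n no-preload-363000 || true
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-363000 -n no-preload-363000 || true
    out/minikube-darwin-amd64 unpause -p no-preload-363000 --alsologtostderr -v=1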

TestStartStop/group/embed-certs/serial/FirstStart (75.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-689000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0108 19:33:16.981120   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:33:44.666009   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
E0108 19:33:52.587877   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/calico-798000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-689000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (1m15.282351258s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.28s)
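
Note: --embed-certs inlines the client certificates into the generated kubeconfig instead of referencing .crt/.key paths on disk. The invocation, copied from the log:

    out/minikube-darwin-amd64 start -p embed-certs-689000 --memory=2200 \
      --alsologtostderr --wait=true --embed-certs \
      --driver=docker --kubernetes-version=v1.28.4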

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-689000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [11deda04-652e-4fac-ad57-c905882f6580] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 19:34:01.297021   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/custom-flannel-798000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [11deda04-652e-4fac-ad57-c905882f6580] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004686025s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-689000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)
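
Note: the deploy step creates a busybox pod from a manifest in the repo's test directory, waits for it to run, then checks the file-descriptor limit inside it. A sketch (the wait is a shell stand-in for the test's Go helper):

    kubectl --context embed-certs-689000 create -f testdata/busybox.yaml
    kubectl --context embed-certs-689000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-689000 exec busybox -- /bin/sh -c "ulimit -n"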

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-689000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-689000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036431748s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-689000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)
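
Note: --images and --registries remap an addon's images; here MetricsServer is deliberately pointed at echoserver on fake.domain so no real metrics-server image is pulled. Commands from this run:

    out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-689000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-689000 describe deploy/metrics-server -n kube-system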

TestStartStop/group/embed-certs/serial/Stop (10.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-689000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-689000 --alsologtostderr -v=3: (10.969496123s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-689000 -n embed-certs-689000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-689000 -n embed-certs-689000: exit status 7 (111.619637ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-689000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.47s)

TestStartStop/group/embed-certs/serial/SecondStart (560.9s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-689000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0108 19:35:15.299003   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/false-798000/client.crt: no such file or directory
E0108 19:35:16.692991   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/enable-default-cni-798000/client.crt: no such file or directory
E0108 19:35:24.125256   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 19:35:33.567526   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 19:35:50.511234   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 19:36:21.786769   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:21.793221   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:21.803819   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:21.825339   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:21.867476   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:21.948521   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:22.108770   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:22.429309   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:23.069460   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:24.350484   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:26.910821   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:32.031377   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/no-preload-363000/client.crt: no such file or directory
E0108 19:36:36.922349   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/bridge-798000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-689000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (9m20.485697967s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-689000 -n embed-certs-689000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (560.90s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vczqw" [b38754f2-10cf-4779-837c-f81a8392a242] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005203751s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vczqw" [b38754f2-10cf-4779-837c-f81a8392a242] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007006616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-689000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-689000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-689000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-689000 -n embed-certs-689000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-689000 -n embed-certs-689000: exit status 2 (400.633217ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-689000 -n embed-certs-689000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-689000 -n embed-certs-689000: exit status 2 (398.823567ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-689000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-689000 -n embed-certs-689000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-689000 -n embed-certs-689000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.34s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (36.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (36.602642146s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (36.60s)
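
Note: this group starts the Kubernetes apiserver on port 8444 instead of minikube's default 8443. The invocation from the log:

    out/minikube-darwin-amd64 start -p default-k8s-diff-port-735000 --memory=2200 \
      --alsologtostderr --wait=true --apiserver-port=8444 \
      --driver=docker --kubernetes-version=v1.28.4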

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d1107a0-b75d-4370-97a1-13405bb9a303] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 19:44:40.046907   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/kubenet-798000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3d1107a0-b75d-4370-97a1-13405bb9a303] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006300597s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-735000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-735000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05413256s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-735000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-735000 --alsologtostderr -v=3: (10.984988403s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (111.070748ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-735000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m32.990573048s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.46s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hkk6h" [0560c31d-e515-4569-be54-3e060e495130] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hkk6h" [0560c31d-e515-4569-be54-3e060e495130] Running
E0108 19:50:50.524623   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004888588s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hkk6h" [0560c31d-e515-4569-be54-3e060e495130] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00501849s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-735000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-735000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 2 (407.105126ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 2 (403.452256ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-735000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.44s)

TestStartStop/group/newest-cni/serial/FirstStart (33.25s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-103000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-103000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (33.25082242s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.25s)
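
Note: this group runs with --network-plugin=cni and a kubeadm pod-network-cidr override but installs no CNI, which is why the later DeployApp and dashboard checks for this group are no-ops ("cni mode requires additional setup before pods can schedule"). The invocation from the log:

    out/minikube-darwin-amd64 start -p newest-cni-103000 --memory=2200 \
      --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --kubernetes-version=v1.29.0-rc.2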

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-103000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-103000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.202317249s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (10.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-103000 --alsologtostderr -v=3
E0108 19:51:41.505600   75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/flannel-798000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-103000 --alsologtostderr -v=3: (10.926215036s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.93s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-103000 -n newest-cni-103000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-103000 -n newest-cni-103000: exit status 7 (112.122585ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-103000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (27.82s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-103000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-103000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (27.402454893s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-103000 -n newest-cni-103000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-103000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
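
Note: the image check lists what the node is actually running and flags anything that is not a stock minikube image. Piping through jq (an assumption for readability, not part of the test) might look like:

    out/minikube-darwin-amd64 -p newest-cni-103000 image list --format=json | jq .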

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-103000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-103000 -n newest-cni-103000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-103000 -n newest-cni-103000: exit status 2 (403.768629ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-103000 -n newest-cni-103000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-103000 -n newest-cni-103000: exit status 2 (396.289104ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-103000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-103000 -n newest-cni-103000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-103000 -n newest-cni-103000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

Test skip (23/329)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (12.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.221413ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vrr55" [c2c45654-82ed-45f0-8712-2bbcc00a3d05] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007235776s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g8zm6" [ee09bce0-7b63-4222-9233-3e4905f507d2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006302425s
addons_test.go:340: (dbg) Run:  kubectl --context addons-388000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-388000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-388000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.905168205s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (12.99s)
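
Note: before skipping, the test verifies in-cluster DNS and connectivity to the registry Service from a throwaway pod; the skip only affects the host-side half of the test. The probe, copied from the log:

    kubectl --context addons-388000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"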

TestAddons/parallel/Ingress (16.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-388000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-388000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-388000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0da22734-55a9-4316-a3b8-4e510b51269f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0da22734-55a9-4316-a3b8-4e510b51269f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.003484027s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-388000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (16.09s)
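
Note: the Host-header check runs curl inside the node over `minikube ssh`, so it succeeds without port forwarding from the macOS host; only the follow-on ingress DNS test is skipped. From the log:

    out/minikube-darwin-amd64 -p addons-388000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"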

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-142000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-142000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-jdg67" [35b4f713-6721-46c6-b411-19a371dcc7c1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-jdg67" [35b4f713-6721-46c6-b411-19a371dcc7c1] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005853336s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.13s)
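
Note: the test gets as far as deploying and exposing the echo server before skipping the connectivity half (broken for port-forwarded drivers per kubernetes/minikube#7383). The setup commands from the log:

    kubectl --context functional-142000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-142000 expose deployment hello-node-connect --type=NodePort --port=8080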

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctional/parallel/MountCmd/any-port (15.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3368159496/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704768132037468000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3368159496/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704768132037468000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3368159496/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704768132037468000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3368159496/001/test-1704768132037468000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (430.746587ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (408.546046ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (502.290639ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (408.099626ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (396.617698ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (438.255174ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (428.639959ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
2024/01/08 18:42:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (509.668254ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p": exit status 1 (489.868852ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-amd64 -p functional-142000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3368159496/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (15.45s)
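
Note: the long run of findmnt failures above is a deliberate poll: the test retries `ssh "findmnt -T /mount-9p | grep 9p"` until the 9p mount appears or it gives up and skips (on macOS the unsigned test binary never gets permission to listen on a non-localhost port, so the mount cannot materialize). A minimal sketch of that polling loop; the profile name and mount point come from the log, while the waitForMount helper is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount appeared: %s", out)
			return nil
		}
		time.Sleep(time.Second) // exit status 1 means "not mounted yet"
	}
	return fmt.Errorf("mount %s did not appear within %s", mountPoint, timeout)
}

func main() {
	if err := waitForMount("functional-142000", "/mount-9p", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}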

TestFunctional/parallel/MountCmd/specific-port (15.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4006327017/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (508.737998ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.416863ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.658427ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.132092ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.297171ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.836907ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.428932ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p": exit status 1 (376.653376ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-142000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4006327017/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (15.49s)
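
Note: the teardown above runs `sudo umount -f /mount-9p` even though the mount never appeared; umount reports "not mounted" with exit status 32, which the suite records but treats as non-fatal. A sketch of that tolerant cleanup step, with the profile name and mount point taken from the log and the helper itself hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func cleanupMount(profile, mountPoint string) {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"ssh", fmt.Sprintf("sudo umount -f %s", mountPoint))
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		// Nothing was mounted (the mount never appeared), so this is
		// expected during teardown and safe to ignore.
		fmt.Printf("cleanup: %s already unmounted\n", mountPoint)
		return
	}
	if err != nil {
		fmt.Printf("cleanup: unmount failed: %v\n%s", err, out)
	}
}

func main() {
	cleanupMount("functional-142000", "/mount-9p")
}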

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-798000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-798000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-798000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/hosts:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/resolv.conf:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-798000

>>> host: crictl pods:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: crictl containers:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> k8s: describe netcat deployment:
error: context "cilium-798000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-798000" does not exist

>>> k8s: netcat logs:
error: context "cilium-798000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-798000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-798000" does not exist

>>> k8s: coredns logs:
error: context "cilium-798000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-798000" does not exist

>>> k8s: api server logs:
error: context "cilium-798000" does not exist

>>> host: /etc/cni:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: ip a s:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: ip r s:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: iptables-save:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: iptables table nat:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-798000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-798000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-798000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-798000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-798000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-798000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-798000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-798000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-798000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-798000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-798000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: kubelet daemon config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> k8s: kubelet logs:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-798000

>>> host: docker daemon status:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: docker daemon config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: docker system info:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: cri-docker daemon status:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: cri-docker daemon config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: cri-dockerd version:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: containerd daemon status:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: containerd daemon config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: containerd config dump:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: crio daemon status:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: crio daemon config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: /etc/crio:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

>>> host: crio config:
* Profile "cilium-798000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798000"

----------------------- debugLogs end: cilium-798000 [took: 5.924672592s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-798000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-798000
--- SKIP: TestNetworkPlugins/group/cilium (6.43s)
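
Note: the debugLogs dump above is produced by running a fixed battery of host and kubectl diagnostics against the profile and printing each result under a ">>> label:" header; because the cilium-798000 profile was never started, every command returns a missing-context or missing-profile error. A minimal Go sketch of that collection loop, using a small illustrative subset of the commands seen above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-798000"
	checks := []struct {
		label string
		args  []string
	}{
		{"k8s: nodes, services, endpoints, daemon sets, deployments and pods, ",
			[]string{"kubectl", "--context", profile, "get", "nodes,svc,ep,ds,deploy,pods", "-A"}},
		{"k8s: describe kube-proxy daemon set",
			[]string{"kubectl", "--context", profile, "-n", "kube-system", "describe", "ds", "kube-proxy"}},
		{"k8s: kubectl config",
			[]string{"kubectl", "config", "view"}},
	}
	for _, c := range checks {
		fmt.Printf(">>> %s:\n", c.label)
		// Capture output and errors instead of aborting, so a missing
		// context still produces a diagnostic line in the dump.
		out, err := exec.Command(c.args[0], c.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil && len(out) == 0 {
			fmt.Println(err)
		}
		fmt.Println()
	}
}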

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-336000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)
