Test Report: Docker_macOS 18169

248a87e642b5c2a9040ef2ce1129e71918aa65a4:2024-02-13:33129

Failed tests (12/333)

TestIngressAddonLegacy/StartLegacyK8sCluster (276.31s)
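
To rerun only this subtest from a minikube checkout, go test's -run filter accepts the slash-separated subtest path. A minimal sketch, assuming the cross-build Makefile target for the darwin binary; the MINIKUBE_BIN export mirrors the agent's environment echoed in the stdout below, and whether the harness consumes it (or wants a flag instead) is an assumption worth checking against test/integration:

    # build the binary the integration tests shell out to
    make out/minikube-darwin-amd64
    # run just the failing subtest; a full cluster start needs a generous timeout
    MINIKUBE_BIN=out/minikube-darwin-amd64 \
      go test ./test/integration -run "TestIngressAddonLegacy/StartLegacyK8sCluster" -timeout 60m -v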

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0213 15:09:17.230790    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:09:44.920960    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:10:14.207769    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.214287    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.225711    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.247634    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.289876    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.370593    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.530775    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.851685    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:15.493944    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:16.775935    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:19.336850    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:24.457144    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:34.698785    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:55.179406    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:11:36.141189    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:12:58.060956    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.264696523s)
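
The failing command is fully specified on the Run line above, so it can also be replayed directly, outside the test harness. A minimal sketch, assuming a built out/minikube-darwin-amd64 and a scratch state directory (the exported paths are illustrative, not the CI agent's):

    # replay the exact start flags from the failed run against a throwaway profile home
    export MINIKUBE_HOME="$HOME/minikube-repro/.minikube"   # illustrative scratch path
    export KUBECONFIG="$HOME/minikube-repro/kubeconfig"     # illustrative
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=docker

The cert_rotation errors above reference client certs for earlier test profiles (addons-441000, functional-443000) that no longer exist on disk; they read as stale-kubeconfig noise left over from prior runs rather than part of this failure, which exits with status 109 during control-plane bring-up.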

-- stdout --
	* [ingress-addon-legacy-181000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-181000 in cluster ingress-addon-legacy-181000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0213 15:09:03.594107    9652 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:09:03.594358    9652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:09:03.594363    9652 out.go:304] Setting ErrFile to fd 2...
	I0213 15:09:03.594367    9652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:09:03.594539    9652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:09:03.596040    9652 out.go:298] Setting JSON to false
	I0213 15:09:03.618659    9652 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2603,"bootTime":1707863140,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:09:03.618764    9652 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:09:03.640548    9652 out.go:177] * [ingress-addon-legacy-181000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:09:03.683455    9652 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:09:03.683537    9652 notify.go:220] Checking for updates...
	I0213 15:09:03.705468    9652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:09:03.727293    9652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:09:03.749378    9652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:09:03.771227    9652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:09:03.792218    9652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:09:03.814615    9652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:09:03.870626    9652 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:09:03.870800    9652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:09:03.973520    9652 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:09:03.964251246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:09:03.994573    9652 out.go:177] * Using the docker driver based on user configuration
	I0213 15:09:04.036662    9652 start.go:298] selected driver: docker
	I0213 15:09:04.036690    9652 start.go:902] validating driver "docker" against <nil>
	I0213 15:09:04.036704    9652 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:09:04.040402    9652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:09:04.146908    9652 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:09:04.13775674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:09:04.147092    9652 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:09:04.147293    9652 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:09:04.168907    9652 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 15:09:04.191700    9652 cni.go:84] Creating CNI manager for ""
	I0213 15:09:04.191729    9652 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:09:04.191744    9652 start_flags.go:321] config:
	{Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:09:04.213787    9652 out.go:177] * Starting control plane node ingress-addon-legacy-181000 in cluster ingress-addon-legacy-181000
	I0213 15:09:04.256976    9652 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 15:09:04.278907    9652 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 15:09:04.320817    9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 15:09:04.320914    9652 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 15:09:04.371203    9652 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 15:09:04.371224    9652 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 15:09:04.603433    9652 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0213 15:09:04.603462    9652 cache.go:56] Caching tarball of preloaded images
	I0213 15:09:04.603842    9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 15:09:04.625718    9652 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0213 15:09:04.647192    9652 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 15:09:05.189528    9652 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0213 15:09:22.641961    9652 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 15:09:22.642149    9652 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0213 15:09:23.274186    9652 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0213 15:09:23.274437    9652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json ...
	I0213 15:09:23.274463    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json: {Name:mkfb5116497b5ef5e775e10a45eb25bdca5f4bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:23.274756    9652 cache.go:194] Successfully downloaded all kic artifacts
	I0213 15:09:23.274787    9652 start.go:365] acquiring machines lock for ingress-addon-legacy-181000: {Name:mk7bdde0987fe3a73821b7b521ea63475abe23f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:09:23.274879    9652 start.go:369] acquired machines lock for "ingress-addon-legacy-181000" in 84.602µs
	I0213 15:09:23.274899    9652 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:09:23.274953    9652 start.go:125] createHost starting for "" (driver="docker")
	I0213 15:09:23.301242    9652 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0213 15:09:23.301602    9652 start.go:159] libmachine.API.Create for "ingress-addon-legacy-181000" (driver="docker")
	I0213 15:09:23.301648    9652 client.go:168] LocalClient.Create starting
	I0213 15:09:23.301847    9652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
	I0213 15:09:23.301940    9652 main.go:141] libmachine: Decoding PEM data...
	I0213 15:09:23.301976    9652 main.go:141] libmachine: Parsing certificate...
	I0213 15:09:23.302057    9652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
	I0213 15:09:23.302127    9652 main.go:141] libmachine: Decoding PEM data...
	I0213 15:09:23.302144    9652 main.go:141] libmachine: Parsing certificate...
	I0213 15:09:23.322678    9652 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-181000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 15:09:23.375110    9652 cli_runner.go:211] docker network inspect ingress-addon-legacy-181000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 15:09:23.375238    9652 network_create.go:281] running [docker network inspect ingress-addon-legacy-181000] to gather additional debugging logs...
	I0213 15:09:23.375259    9652 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-181000
	W0213 15:09:23.425810    9652 cli_runner.go:211] docker network inspect ingress-addon-legacy-181000 returned with exit code 1
	I0213 15:09:23.425853    9652 network_create.go:284] error running [docker network inspect ingress-addon-legacy-181000]: docker network inspect ingress-addon-legacy-181000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-181000 not found
	I0213 15:09:23.425869    9652 network_create.go:286] output of [docker network inspect ingress-addon-legacy-181000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-181000 not found
	
	** /stderr **
	I0213 15:09:23.426031    9652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 15:09:23.476436    9652 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004b9810}
	I0213 15:09:23.476472    9652 network_create.go:124] attempt to create docker network ingress-addon-legacy-181000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0213 15:09:23.476539    9652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 ingress-addon-legacy-181000
	I0213 15:09:23.563711    9652 network_create.go:108] docker network ingress-addon-legacy-181000 192.168.49.0/24 created
	I0213 15:09:23.563755    9652 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-181000" container
	I0213 15:09:23.563882    9652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 15:09:23.625108    9652 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-181000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --label created_by.minikube.sigs.k8s.io=true
	I0213 15:09:23.676626    9652 oci.go:103] Successfully created a docker volume ingress-addon-legacy-181000
	I0213 15:09:23.676756    9652 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-181000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --entrypoint /usr/bin/test -v ingress-addon-legacy-181000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 15:09:24.063790    9652 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-181000
	I0213 15:09:24.063833    9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 15:09:24.063848    9652 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 15:09:24.063963    9652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-181000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 15:09:26.401244    9652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-181000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.337239049s)
	I0213 15:09:26.401272    9652 kic.go:203] duration metric: took 2.337453 seconds to extract preloaded images to volume
	I0213 15:09:26.401396    9652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 15:09:26.504609    9652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-181000 --name ingress-addon-legacy-181000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --network ingress-addon-legacy-181000 --ip 192.168.49.2 --volume ingress-addon-legacy-181000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 15:09:26.762006    9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Running}}
	I0213 15:09:26.817017    9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:09:26.875752    9652 cli_runner.go:164] Run: docker exec ingress-addon-legacy-181000 stat /var/lib/dpkg/alternatives/iptables
	I0213 15:09:27.050434    9652 oci.go:144] the created container "ingress-addon-legacy-181000" has a running status.
	I0213 15:09:27.050518    9652 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa...
	I0213 15:09:27.212222    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0213 15:09:27.212293    9652 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 15:09:27.276816    9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:09:27.332054    9652 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 15:09:27.332077    9652 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-181000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 15:09:27.433181    9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:09:27.484108    9652 machine.go:88] provisioning docker machine ...
	I0213 15:09:27.484166    9652 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-181000"
	I0213 15:09:27.484262    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:27.536381    9652 main.go:141] libmachine: Using SSH client type: native
	I0213 15:09:27.536720    9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53249 <nil> <nil>}
	I0213 15:09:27.536736    9652 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-181000 && echo "ingress-addon-legacy-181000" | sudo tee /etc/hostname
	I0213 15:09:27.702473    9652 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-181000
	
	I0213 15:09:27.702572    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:27.755020    9652 main.go:141] libmachine: Using SSH client type: native
	I0213 15:09:27.755330    9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53249 <nil> <nil>}
	I0213 15:09:27.755346    9652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-181000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-181000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-181000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:09:27.893812    9652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:09:27.893830    9652 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 15:09:27.893852    9652 ubuntu.go:177] setting up certificates
	I0213 15:09:27.893858    9652 provision.go:83] configureAuth start
	I0213 15:09:27.893929    9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
	I0213 15:09:27.945748    9652 provision.go:138] copyHostCerts
	I0213 15:09:27.945801    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 15:09:27.945859    9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 15:09:27.945868    9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 15:09:27.945977    9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 15:09:27.946156    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 15:09:27.946183    9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 15:09:27.946187    9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 15:09:27.946304    9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 15:09:27.946475    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 15:09:27.946518    9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 15:09:27.946523    9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 15:09:27.946602    9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 15:09:27.946757    9652 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-181000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-181000]
	I0213 15:09:28.118046    9652 provision.go:172] copyRemoteCerts
	I0213 15:09:28.118100    9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:09:28.118161    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:28.169302    9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:09:28.271875    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 15:09:28.271955    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0213 15:09:28.311039    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 15:09:28.311108    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 15:09:28.350632    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 15:09:28.350715    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:09:28.390750    9652 provision.go:86] duration metric: configureAuth took 496.841679ms
	I0213 15:09:28.390764    9652 ubuntu.go:193] setting minikube options for container-runtime
	I0213 15:09:28.390972    9652 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:09:28.391069    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:28.443733    9652 main.go:141] libmachine: Using SSH client type: native
	I0213 15:09:28.444024    9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53249 <nil> <nil>}
	I0213 15:09:28.444041    9652 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:09:28.581127    9652 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 15:09:28.581145    9652 ubuntu.go:71] root file system type: overlay
	I0213 15:09:28.581228    9652 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:09:28.581309    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:28.631890    9652 main.go:141] libmachine: Using SSH client type: native
	I0213 15:09:28.632180    9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53249 <nil> <nil>}
	I0213 15:09:28.632227    9652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:09:28.794951    9652 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:09:28.795050    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:28.847257    9652 main.go:141] libmachine: Using SSH client type: native
	I0213 15:09:28.847564    9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 53249 <nil> <nil>}
	I0213 15:09:28.847577    9652 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:09:29.473047    9652 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-13 23:09:28.789911782 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 15:09:29.473071    9652 machine.go:91] provisioned docker machine in 1.988955159s
	I0213 15:09:29.473077    9652 client.go:171] LocalClient.Create took 6.171495952s
	I0213 15:09:29.473099    9652 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-181000" took 6.171574084s
	I0213 15:09:29.473107    9652 start.go:300] post-start starting for "ingress-addon-legacy-181000" (driver="docker")
	I0213 15:09:29.473115    9652 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:09:29.473178    9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:09:29.473239    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:29.525043    9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:09:29.629655    9652 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:09:29.633582    9652 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 15:09:29.633605    9652 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 15:09:29.633612    9652 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 15:09:29.633618    9652 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 15:09:29.633628    9652 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 15:09:29.633729    9652 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 15:09:29.633915    9652 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 15:09:29.633921    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> /etc/ssl/certs/67762.pem
	I0213 15:09:29.634122    9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:09:29.648529    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:09:29.687873    9652 start.go:303] post-start completed in 214.759349ms
	I0213 15:09:29.688761    9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
	I0213 15:09:29.741235    9652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json ...
	I0213 15:09:29.741685    9652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:09:29.741762    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:29.793189    9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:09:29.886756    9652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 15:09:29.891474    9652 start.go:128] duration metric: createHost completed in 6.616585669s
	I0213 15:09:29.891494    9652 start.go:83] releasing machines lock for "ingress-addon-legacy-181000", held for 6.616685649s
	I0213 15:09:29.891586    9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
	I0213 15:09:29.942335    9652 ssh_runner.go:195] Run: cat /version.json
	I0213 15:09:29.942361    9652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:09:29.942412    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:29.942435    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:29.999289    9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:09:29.999343    9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:09:30.200135    9652 ssh_runner.go:195] Run: systemctl --version
	I0213 15:09:30.204802    9652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 15:09:30.209910    9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 15:09:30.251292    9652 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 15:09:30.251378    9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:09:30.279163    9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:09:30.307994    9652 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
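The find/sed one-liners above rewrite every "subnet" (and "gateway") value in the bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR. Since those files are JSON, the same patch can be done structurally; a sketch under that assumption, using one of the paths named in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// patchSubnets rewrites every "subnet" value in a parsed CNI config to the
// pod CIDR, the structured equivalent of the sed rewrites above.
func patchSubnets(node interface{}, cidr string) {
	switch v := node.(type) {
	case map[string]interface{}:
		for k, child := range v {
			if k == "subnet" {
				v[k] = cidr
				continue
			}
			patchSubnets(child, cidr)
		}
	case []interface{}:
		for _, child := range v {
			patchSubnets(child, cidr)
		}
	}
}

func main() {
	raw, err := os.ReadFile("/etc/cni/net.d/87-podman-bridge.conflist")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var doc interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	patchSubnets(doc, "10.244.0.0/16")
	out, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(out))
}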
	I0213 15:09:30.308010    9652 start.go:475] detecting cgroup driver to use...
	I0213 15:09:30.308022    9652 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:09:30.308122    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:09:30.336005    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0213 15:09:30.352301    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:09:30.368855    9652 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:09:30.368937    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:09:30.385399    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:09:30.402042    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:09:30.418751    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:09:30.434210    9652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:09:30.449653    9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:09:30.465762    9652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:09:30.480796    9652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:09:30.496072    9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:09:30.559159    9652 ssh_runner.go:195] Run: sudo systemctl restart containerd
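The containerd.go:146 step above aligns containerd with the "cgroupfs" driver detected on the host by forcing SystemdCgroup = false in /etc/containerd/config.toml before restarting the daemon. The same edit the sed call performs, as a small Go sketch:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs mirrors the sed invocation above: rewrite any
// "SystemdCgroup = ..." line in config.toml to false, preserving indentation.
func setCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(in))
}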
	I0213 15:09:30.648069    9652 start.go:475] detecting cgroup driver to use...
	I0213 15:09:30.648103    9652 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:09:30.648214    9652 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:09:30.667414    9652 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 15:09:30.667484    9652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:09:30.686980    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:09:30.718409    9652 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:09:30.722695    9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:09:30.738140    9652 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:09:30.767910    9652 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:09:30.855187    9652 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:09:30.920942    9652 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:09:30.921088    9652 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:09:30.951648    9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:09:31.014921    9652 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:09:31.254985    9652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:09:31.278515    9652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
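The docker.go:574 step scp's a 130-byte daemon.json to pin Docker to the "cgroupfs" driver before the restart above. The exact payload is not visible in this log; a typical shape for that setting, offered only as an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed contents of the daemon.json written above; minikube's exact
	// file is not shown in this log.
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}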
	I0213 15:09:31.347047    9652 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0213 15:09:31.347178    9652 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-181000 dig +short host.docker.internal
	I0213 15:09:31.469345    9652 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 15:09:31.469492    9652 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 15:09:31.474024    9652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
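The bash pipeline above makes the host.minikube.internal entry idempotent: strip any stale line for the name, then append the fresh IP mapping and copy the result back over /etc/hosts. The same logic in Go, as a sketch:

package main

import (
	"fmt"
	"strings"
)

// updateHosts reproduces the grep -v / echo / cp pipeline above: drop any
// stale line ending in "\t<name>", then append the fresh mapping.
func updateHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(updateHosts("127.0.0.1\tlocalhost\n", "192.168.65.254", "host.minikube.internal"))
}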
	I0213 15:09:31.491134    9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:09:31.544727    9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 15:09:31.544818    9652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:09:31.563087    9652 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0213 15:09:31.563101    9652 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 15:09:31.563158    9652 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:09:31.578017    9652 ssh_runner.go:195] Run: which lz4
	I0213 15:09:31.582436    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0213 15:09:31.582545    9652 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 15:09:31.586858    9652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:09:31.586885    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0213 15:09:38.376589    9652 docker.go:649] Took 6.794167 seconds to copy over tarball
	I0213 15:09:38.376721    9652 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 15:09:40.080819    9652 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.70403556s)
	I0213 15:09:40.080853    9652 ssh_runner.go:146] rm: /preloaded.tar.lz4
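The preload sequence above is: stat /preloaded.tar.lz4 on the node (exit status 1, so it is absent), scp the 424 MB tarball over, then unpack it into /var with lz4 decompression while preserving security xattrs, and finally remove it. A local sketch of the decide-then-extract step, assuming tar and lz4 are on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // path used in the log above
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing, would copy it over first:", err)
		return
	}
	// Same extraction flags as the ssh_runner command in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}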
	I0213 15:09:40.137763    9652 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:09:40.153476    9652 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0213 15:09:40.182812    9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:09:40.248363    9652 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:09:41.294707    9652 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.046321694s)
	I0213 15:09:41.294872    9652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:09:41.313750    9652 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0213 15:09:41.313765    9652 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 15:09:41.313777    9652 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:09:41.319602    9652 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 15:09:41.319636    9652 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:09:41.319628    9652 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 15:09:41.320012    9652 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0213 15:09:41.320094    9652 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0213 15:09:41.320319    9652 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 15:09:41.320528    9652 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0213 15:09:41.320558    9652 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 15:09:41.324669    9652 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0213 15:09:41.325083    9652 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 15:09:41.326510    9652 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0213 15:09:41.326590    9652 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 15:09:41.326773    9652 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:09:41.326770    9652 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 15:09:41.326869    9652 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0213 15:09:41.327037    9652 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 15:09:43.245367    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0213 15:09:43.265928    9652 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0213 15:09:43.265968    9652 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 15:09:43.266039    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0213 15:09:43.284110    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0213 15:09:43.289184    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0213 15:09:43.307733    9652 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0213 15:09:43.307765    9652 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 15:09:43.307826    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0213 15:09:43.326100    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0213 15:09:43.332383    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0213 15:09:43.350470    9652 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0213 15:09:43.350496    9652 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0213 15:09:43.350552    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0213 15:09:43.364805    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0213 15:09:43.368402    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0213 15:09:43.375707    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 15:09:43.376292    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0213 15:09:43.385279    9652 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0213 15:09:43.385305    9652 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0213 15:09:43.385382    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0213 15:09:43.385668    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0213 15:09:43.400033    9652 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0213 15:09:43.400063    9652 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 15:09:43.400128    9652 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0213 15:09:43.400150    9652 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 15:09:43.400165    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 15:09:43.400220    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0213 15:09:43.410430    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0213 15:09:43.410784    9652 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0213 15:09:43.410811    9652 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0213 15:09:43.410867    9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0213 15:09:43.443755    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0213 15:09:43.444163    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0213 15:09:43.452418    9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0213 15:09:43.863590    9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:09:43.883075    9652 cache_images.go:92] LoadImages completed in 2.569314804s
	W0213 15:09:43.883125    9652 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
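The LoadImages pass above works per image: inspect the runtime for the tag's ID, compare it to the expected hash, and on mismatch remove the tag and reload it from the local cache directory. Here it fails harmlessly because the amd64 cache files under .minikube/cache/images were never present, so each "Loading image from" step had nothing to send. A sketch of the needs-transfer check, using the pause image and hash from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer mirrors the cache_images.go:116 check: ask the runtime for
// the image ID and compare it to the expected hash.
func needsTransfer(image, wantHash string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantHash)
}

func main() {
	transfer := needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("needs transfer:", transfer)
}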
	I0213 15:09:43.883207    9652 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:09:43.930258    9652 cni.go:84] Creating CNI manager for ""
	I0213 15:09:43.930280    9652 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:09:43.930297    9652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:09:43.930316    9652 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-181000 NodeName:ingress-addon-legacy-181000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 15:09:43.930443    9652 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-181000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 15:09:43.930519    9652 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-181000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
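The kubeadm.go:176 options struct above is what gets rendered into the kubeadm config YAML shown earlier. A cut-down sketch of that rendering via text/template; the struct fields and template here are illustrative stand-ins, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const configTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(configTmpl))
	if err := t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		NodeName:         "ingress-addon-legacy-181000",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.18.20",
	}); err != nil {
		panic(err)
	}
}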
	I0213 15:09:43.930581    9652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0213 15:09:43.945828    9652 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:09:43.945930    9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:09:43.961559    9652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0213 15:09:43.990307    9652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0213 15:09:44.019355    9652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0213 15:09:44.049742    9652 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0213 15:09:44.054378    9652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:09:44.071236    9652 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000 for IP: 192.168.49.2
	I0213 15:09:44.071257    9652 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.071429    9652 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 15:09:44.071504    9652 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 15:09:44.071553    9652 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key
	I0213 15:09:44.071569    9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt with IP's: []
	I0213 15:09:44.271303    9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt ...
	I0213 15:09:44.271317    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt: {Name:mkb1064f16bfde5f75907db94e49fd65a44aa1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.271634    9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key ...
	I0213 15:09:44.271643    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key: {Name:mkffe42e448aba377178de2e6d44d591e5c6694c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.271862    9652 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2
	I0213 15:09:44.271883    9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 15:09:44.390905    9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 ...
	I0213 15:09:44.390916    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2: {Name:mkea8110efdc719375bd451115da36144123d377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.391180    9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2 ...
	I0213 15:09:44.391192    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2: {Name:mkf19426e626e1a69599caf73770b2e8e490c01d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.391394    9652 certs.go:337] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt
	I0213 15:09:44.391561    9652 certs.go:341] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key
	I0213 15:09:44.391728    9652 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key
	I0213 15:09:44.391741    9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt with IP's: []
	I0213 15:09:44.443736    9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt ...
	I0213 15:09:44.443746    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt: {Name:mk535a8c35161968c1bcf86ff771ade1a2f92e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.443989    9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key ...
	I0213 15:09:44.444002    9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key: {Name:mk310d5595f841f8bcf734e06d61de18e94b68ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:44.444193    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 15:09:44.444222    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 15:09:44.444240    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 15:09:44.444262    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 15:09:44.444282    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 15:09:44.444299    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 15:09:44.444316    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 15:09:44.444332    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
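The crypto.go:68/156/164 records above generate profile certificates signed by the existing minikube CA, with the apiserver cert covering the IP SANs listed in the log (192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1). A self-contained sketch of that CA-signed issuance with crypto/x509; the CA is generated in-memory here as a stand-in for the on-disk ca.key, common names are illustrative, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for the existing minikube CA loaded from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs from the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}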
	I0213 15:09:44.444416    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 15:09:44.444461    9652 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 15:09:44.444492    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:09:44.444533    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:09:44.444567    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:09:44.444600    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 15:09:44.444675    9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:09:44.444712    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:09:44.444733    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem -> /usr/share/ca-certificates/6776.pem
	I0213 15:09:44.444757    9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> /usr/share/ca-certificates/67762.pem
	I0213 15:09:44.445275    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:09:44.486936    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:09:44.527814    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:09:44.568540    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 15:09:44.608976    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:09:44.649834    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 15:09:44.690497    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:09:44.732137    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 15:09:44.772902    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:09:44.813586    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 15:09:44.854022    9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 15:09:44.893781    9652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:09:44.922662    9652 ssh_runner.go:195] Run: openssl version
	I0213 15:09:44.928700    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:09:44.944476    9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:09:44.948773    9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:09:44.948818    9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:09:44.955327    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:09:44.970803    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 15:09:44.986741    9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 15:09:44.991382    9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 15:09:44.991462    9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 15:09:44.998301    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 15:09:45.014793    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 15:09:45.031463    9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 15:09:45.035966    9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 15:09:45.036018    9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 15:09:45.042714    9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
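The openssl x509 -hash calls above compute each cert's subject hash, which becomes the <hash>.0 symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A sketch of deriving that link name, shelling out to the same openssl invocation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs symlink name for a PEM cert,
// using the same openssl call as the log above.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/etc/ssl/certs/" + link) // b5213941.0 in this run
}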
	I0213 15:09:45.058336    9652 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:09:45.062598    9652 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 15:09:45.062644    9652 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:09:45.062743    9652 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:09:45.081562    9652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:09:45.096841    9652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:09:45.111773    9652 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:09:45.111836    9652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:09:45.127639    9652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:09:45.127683    9652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:09:45.179099    9652 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 15:09:45.179154    9652 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:09:45.453725    9652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:09:45.453894    9652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:09:45.454004    9652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:09:45.619405    9652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:09:45.619894    9652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:09:45.619941    9652 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:09:45.697064    9652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:09:45.743263    9652 out.go:204]   - Generating certificates and keys ...
	I0213 15:09:45.743330    9652 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:09:45.743387    9652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:09:46.016890    9652 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 15:09:46.135012    9652 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 15:09:46.240726    9652 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 15:09:46.483591    9652 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 15:09:46.642561    9652 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 15:09:46.642742    9652 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 15:09:46.816467    9652 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 15:09:46.816693    9652 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 15:09:46.989837    9652 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 15:09:47.071238    9652 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 15:09:47.260438    9652 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 15:09:47.260535    9652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:09:47.436955    9652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:09:47.666880    9652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:09:47.758533    9652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:09:47.806297    9652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:09:47.806781    9652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:09:47.829513    9652 out.go:204]   - Booting up control plane ...
	I0213 15:09:47.829592    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:09:47.829680    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:09:47.829743    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:09:47.829813    9652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:09:47.829940    9652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:10:27.817636    9652 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:10:27.818625    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:10:27.818860    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:10:32.819769    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:10:32.819944    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:10:42.821271    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:10:42.821553    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:11:02.824119    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:11:02.824346    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:11:42.824226    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:11:42.824404    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:11:42.824427    9652 kubeadm.go:322] 
	I0213 15:11:42.824470    9652 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0213 15:11:42.824512    9652 kubeadm.go:322] 		timed out waiting for the condition
	I0213 15:11:42.824517    9652 kubeadm.go:322] 
	I0213 15:11:42.824568    9652 kubeadm.go:322] 	This error is likely caused by:
	I0213 15:11:42.824597    9652 kubeadm.go:322] 		- The kubelet is not running
	I0213 15:11:42.824706    9652 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 15:11:42.824719    9652 kubeadm.go:322] 
	I0213 15:11:42.824813    9652 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 15:11:42.824845    9652 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0213 15:11:42.824881    9652 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0213 15:11:42.824890    9652 kubeadm.go:322] 
	I0213 15:11:42.824994    9652 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 15:11:42.825063    9652 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0213 15:11:42.825076    9652 kubeadm.go:322] 
	I0213 15:11:42.825145    9652 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0213 15:11:42.825182    9652 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0213 15:11:42.825241    9652 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0213 15:11:42.825274    9652 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0213 15:11:42.825282    9652 kubeadm.go:322] 
	I0213 15:11:42.829395    9652 kubeadm.go:322] W0213 23:09:45.178564    1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 15:11:42.829632    9652 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 15:11:42.829728    9652 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 15:11:42.829842    9652 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0213 15:11:42.829920    9652 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:11:42.830013    9652 kubeadm.go:322] W0213 23:09:47.812069    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 15:11:42.830101    9652 kubeadm.go:322] W0213 23:09:47.812889    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 15:11:42.830166    9652 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 15:11:42.830227    9652 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
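The repeated kubelet-check failures above are kubeadm probing the kubelet healthz endpoint until it answers or the 4m0s wait-control-plane window expires; in this run every probe gets connection refused, so init fails. The probe itself, as a minimal Go sketch of the curl shown in the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint as the kubelet-check curl above.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err) // connection refused in this run
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}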
	W0213 15:11:42.830334    9652 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0213 23:09:45.178564    1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0213 23:09:47.812069    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0213 23:09:47.812889    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
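
	The repeated [kubelet-check] lines above are kubeadm polling the kubelet's health endpoint on localhost:10248 until its 4m0s wait expires. A minimal sketch of running the same probe and the suggested container listing by hand inside the node (the profile name comes from this run; the retry count and sleep interval are illustrative, not kubeadm's exact back-off):

	# Inside the node's shell, opened with:
	#   out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 ssh
	for i in 1 2 3 4 5; do
	  curl -sSL http://localhost:10248/healthz && break  # a healthy kubelet answers "ok"
	  sleep 5                                            # illustrative interval
	done
	docker ps -a | grep kube | grep -v pause             # container listing suggested above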
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0213 23:09:45.178564    1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0213 23:09:47.812069    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0213 23:09:47.812889    1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 15:11:42.830371    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 15:11:43.257072    9652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:11:43.274286    9652 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:11:43.274356    9652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:11:43.289388    9652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:11:43.289416    9652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:11:43.342688    9652 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 15:11:43.342760    9652 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:11:43.576339    9652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:11:43.576432    9652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:11:43.576523    9652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:11:43.742890    9652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:11:43.743426    9652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:11:43.743465    9652 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:11:43.815304    9652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:11:43.836993    9652 out.go:204]   - Generating certificates and keys ...
	I0213 15:11:43.837120    9652 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:11:43.837183    9652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:11:43.837300    9652 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:11:43.837348    9652 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:11:43.837400    9652 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:11:43.837441    9652 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:11:43.837494    9652 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:11:43.837543    9652 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:11:43.837624    9652 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:11:43.837743    9652 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:11:43.837779    9652 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:11:43.837819    9652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:11:43.863969    9652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:11:44.043777    9652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:11:44.153406    9652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:11:44.234967    9652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:11:44.235380    9652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:11:44.256840    9652 out.go:204]   - Booting up control plane ...
	I0213 15:11:44.256906    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:11:44.256961    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:11:44.257003    9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:11:44.257068    9652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:11:44.257201    9652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:12:24.244520    9652 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:12:24.245417    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:12:24.245554    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:12:29.247099    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:12:29.247344    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:12:39.249051    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:12:39.249203    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:12:59.250589    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:12:59.250769    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:13:39.252723    9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:13:39.252897    9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:13:39.252911    9652 kubeadm.go:322] 
	I0213 15:13:39.252940    9652 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0213 15:13:39.252988    9652 kubeadm.go:322] 		timed out waiting for the condition
	I0213 15:13:39.252999    9652 kubeadm.go:322] 
	I0213 15:13:39.253026    9652 kubeadm.go:322] 	This error is likely caused by:
	I0213 15:13:39.253051    9652 kubeadm.go:322] 		- The kubelet is not running
	I0213 15:13:39.253146    9652 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 15:13:39.253159    9652 kubeadm.go:322] 
	I0213 15:13:39.253243    9652 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 15:13:39.253285    9652 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0213 15:13:39.253339    9652 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0213 15:13:39.253347    9652 kubeadm.go:322] 
	I0213 15:13:39.253433    9652 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 15:13:39.253506    9652 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0213 15:13:39.253512    9652 kubeadm.go:322] 
	I0213 15:13:39.253578    9652 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0213 15:13:39.253617    9652 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0213 15:13:39.253674    9652 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0213 15:13:39.253705    9652 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0213 15:13:39.253713    9652 kubeadm.go:322] 
	I0213 15:13:39.257612    9652 kubeadm.go:322] W0213 23:11:43.341596    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 15:13:39.257749    9652 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 15:13:39.257822    9652 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 15:13:39.257931    9652 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0213 15:13:39.258022    9652 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:13:39.258115    9652 kubeadm.go:322] W0213 23:11:44.240004    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 15:13:39.258206    9652 kubeadm.go:322] W0213 23:11:44.240685    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 15:13:39.258266    9652 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 15:13:39.258333    9652 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 15:13:39.258360    9652 kubeadm.go:406] StartCluster complete in 3m54.198478239s
	I0213 15:13:39.258445    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:39.276597    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.276618    9652 logs.go:278] No container was found matching "kube-apiserver"
	I0213 15:13:39.276710    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:39.294670    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.294684    9652 logs.go:278] No container was found matching "etcd"
	I0213 15:13:39.294757    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:39.312405    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.312419    9652 logs.go:278] No container was found matching "coredns"
	I0213 15:13:39.312487    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:39.330699    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.330712    9652 logs.go:278] No container was found matching "kube-scheduler"
	I0213 15:13:39.330788    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:39.348796    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.348809    9652 logs.go:278] No container was found matching "kube-proxy"
	I0213 15:13:39.348887    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:39.365395    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.365408    9652 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 15:13:39.365479    9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:39.382273    9652 logs.go:276] 0 containers: []
	W0213 15:13:39.382286    9652 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:39.382294    9652 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:39.382302    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:39.403643    9652 logs.go:123] Gathering logs for container status ...
	I0213 15:13:39.403657    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:39.464895    9652 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:39.464909    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:39.507120    9652 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:39.507136    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:39.526527    9652 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:39.526541    9652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 15:13:39.586905    9652 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
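
	Every container lookup above returned empty and kubectl was refused on localhost:8443, so no control-plane component ever started; the failure sits in the kubelet itself, below Kubernetes. The same two checks, run by hand inside the node (both commands mirror ones this log already executes):

	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}} {{.Status}}'
	sudo journalctl -u kubelet -n 50 --no-pager   # the kubelet's own account of why :10248 is refused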
	W0213 15:13:39.586936    9652 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0213 23:11:43.341596    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0213 23:11:44.240004    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0213 23:11:44.240685    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 15:13:39.586956    9652 out.go:239] * 
	W0213 15:13:39.587002    9652 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0213 23:11:43.341596    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0213 23:11:44.240004    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0213 23:11:44.240685    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 15:13:39.587017    9652 out.go:239] * 
	W0213 15:13:39.587644    9652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
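
	For the issue report the box above asks for, the log-collection command spelled out against this profile (the -p flag is added here for this run; --file is the documented way to write the bundle to disk):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 logs --file=logs.txt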
	I0213 15:13:39.674487    9652 out.go:177] 
	W0213 15:13:39.717390    9652 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0213 23:11:43.341596    4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0213 23:11:44.240004    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0213 23:11:44.240685    4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 15:13:39.717451    9652 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 15:13:39.717517    9652 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
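
	The suggested workaround, written out as a full invocation; every flag except the cgroup-driver override is copied from the failing start command above, and whether it resolves this particular run is untested:

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --alsologtostderr -v=5 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd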
	I0213 15:13:39.759415    9652 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.31s)
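
Consistent with the IsDockerSystemdCheck warning repeated throughout this log, the node's Docker daemon runs the cgroupfs driver while kubeadm recommends systemd. A hedged remediation sketch on the node, using Docker's documented exec-opts setting; applying it inside this CI node image is an assumption, and note it overwrites any existing daemon.json:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # expect: systemd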

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (87.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 addons enable ingress --alsologtostderr -v=5
E0213 15:14:17.228025    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m26.534694608s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:13:39.901301    9824 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:13:39.902158    9824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:13:39.902165    9824 out.go:304] Setting ErrFile to fd 2...
	I0213 15:13:39.902169    9824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:13:39.902361    9824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:13:39.902717    9824 mustload.go:65] Loading cluster: ingress-addon-legacy-181000
	I0213 15:13:39.903042    9824 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:13:39.903058    9824 addons.go:597] checking whether the cluster is paused
	I0213 15:13:39.903136    9824 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:13:39.903151    9824 host.go:66] Checking if "ingress-addon-legacy-181000" exists ...
	I0213 15:13:39.903534    9824 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:13:39.953012    9824 ssh_runner.go:195] Run: systemctl --version
	I0213 15:13:39.953106    9824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:13:40.003609    9824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:13:40.097069    9824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:13:40.137348    9824 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0213 15:13:40.158975    9824 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:13:40.158989    9824 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-181000"
	I0213 15:13:40.158997    9824 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-181000"
	I0213 15:13:40.159024    9824 host.go:66] Checking if "ingress-addon-legacy-181000" exists ...
	I0213 15:13:40.159331    9824 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:13:40.235265    9824 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0213 15:13:40.257426    9824 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0213 15:13:40.278975    9824 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0213 15:13:40.306509    9824 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0213 15:13:40.327191    9824 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 15:13:40.327213    9824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0213 15:13:40.327333    9824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:13:40.377755    9824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:13:40.494651    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:40.641739    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:40.641766    9824 retry.go:31] will retry after 258.097755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:40.900194    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:40.959672    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:40.959698    9824 retry.go:31] will retry after 285.593534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:41.245628    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:41.300065    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:41.300084    9824 retry.go:31] will retry after 512.536502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:41.812826    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:41.875558    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:41.875583    9824 retry.go:31] will retry after 607.530973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:42.483560    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:42.544295    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:42.544315    9824 retry.go:31] will retry after 1.485178803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:44.030418    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:44.087453    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:44.087481    9824 retry.go:31] will retry after 2.811491536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:46.899451    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:46.958938    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:46.958958    9824 retry.go:31] will retry after 1.661904989s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:48.622198    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:48.688457    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:48.688475    9824 retry.go:31] will retry after 4.23638226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:52.924985    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:53.000879    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:53.000896    9824 retry.go:31] will retry after 6.827259321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:59.828592    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:13:59.886765    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:13:59.886782    9824 retry.go:31] will retry after 5.565041649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:05.453696    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:14:05.510993    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:05.511027    9824 retry.go:31] will retry after 8.076132401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:13.587755    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:14:13.653687    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:13.653703    9824 retry.go:31] will retry after 29.517231575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:43.170753    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:14:43.225980    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:14:43.225996    9824 retry.go:31] will retry after 22.949048907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:06.183231    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0213 15:15:06.247475    9824 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:06.247505    9824 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-181000"
	I0213 15:15:06.268831    9824 out.go:177] * Verifying ingress addon...
	I0213 15:15:06.310882    9824 out.go:177] 
	W0213 15:15:06.331891    9824 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-181000" does not exist: client config: context "ingress-addon-legacy-181000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-181000" does not exist: client config: context "ingress-addon-legacy-181000" does not exist]
	W0213 15:15:06.331919    9824 out.go:239] * 
	* 
	W0213 15:15:06.338673    9824 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:15:06.359879    9824 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-181000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-181000:

-- stdout --
	[
	    {
	        "Id": "cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3",
	        "Created": "2024-02-13T23:09:26.558440668Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 59926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:09:26.75414569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hosts",
	        "LogPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3-json.log",
	        "Name": "/ingress-addon-legacy-181000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-181000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-181000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-181000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-181000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-181000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd92e9e88d73de7a78015de8fc26617c1d825183139de2670ee6ca0690697d7",
	            "SandboxKey": "/var/run/docker/netns/dfd92e9e88d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53251"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-181000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cccacda9b19a",
	                        "ingress-addon-legacy-181000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fc69d5909d8ff616a13bdca0728fa605a5fdcd31701ff62b9278b3e78dfe2543",
	                    "EndpointID": "901703549c6846a31b57414236bd57a6d56f5c1d4f06f75f7d380938da61bf9a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-181000",
	                        "cccacda9b19a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
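The inspect dump above shows the node container Running with the apiserver port 8443/tcp published on 127.0.0.1:53253, so the repeated "connection refused" errors point at the apiserver process inside the container rather than at the Docker port mapping. A quick way to check both halves, as a sketch (the curl is expected to fail in this run, since the control plane never came up):

  # confirm the host-side mapping without the full inspect dump
  docker port ingress-addon-legacy-181000 8443/tcp

  # probe the mapped apiserver port; -k skips TLS verification
  curl -k https://127.0.0.1:53253/healthz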
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000: exit status 6 (434.084865ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 15:15:06.849281    9874 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-181000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-181000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (87.02s)
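The status check above also flags a stale kubeconfig: the "ingress-addon-legacy-181000" context is missing from /Users/jenkins/minikube-integration/18169-6320/kubeconfig. The warning's own remedy, shown here as a sketch, would only help once the apiserver is actually reachable, which it never was in this run:

  # rewrite the kubeconfig entry for this profile, as the warning suggests
  out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 update-context

  # verify the active context afterwards
  kubectl config current-context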

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (114.33s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 addons enable ingress-dns --alsologtostderr -v=5
E0213 15:15:14.212186    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:15:41.911137    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m53.884830277s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0213 15:15:06.931333    9884 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:15:06.931629    9884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:06.931634    9884 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:06.931638    9884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:06.931828    9884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:15:06.932448    9884 mustload.go:65] Loading cluster: ingress-addon-legacy-181000
	I0213 15:15:06.932734    9884 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:15:06.932749    9884 addons.go:597] checking whether the cluster is paused
	I0213 15:15:06.932835    9884 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:15:06.932850    9884 host.go:66] Checking if "ingress-addon-legacy-181000" exists ...
	I0213 15:15:06.933308    9884 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:15:06.984643    9884 ssh_runner.go:195] Run: systemctl --version
	I0213 15:15:06.984738    9884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:15:07.036498    9884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:15:07.132432    9884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:15:07.173424    9884 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0213 15:15:07.195080    9884 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 15:15:07.195094    9884 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-181000"
	I0213 15:15:07.195102    9884 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-181000"
	I0213 15:15:07.195129    9884 host.go:66] Checking if "ingress-addon-legacy-181000" exists ...
	I0213 15:15:07.195437    9884 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
	I0213 15:15:07.265737    9884 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0213 15:15:07.287171    9884 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0213 15:15:07.309171    9884 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 15:15:07.309202    9884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0213 15:15:07.309344    9884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
	I0213 15:15:07.359988    9884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
	I0213 15:15:07.476827    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:07.535284    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:07.535312    9884 retry.go:31] will retry after 291.890246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:07.828384    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:07.888366    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:07.888403    9884 retry.go:31] will retry after 507.540922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:08.398441    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:08.464222    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:08.464239    9884 retry.go:31] will retry after 523.819962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:08.988687    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:09.046664    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:09.046690    9884 retry.go:31] will retry after 465.994237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:09.512969    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:09.576498    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:09.576516    9884 retry.go:31] will retry after 1.790237131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:11.367627    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:11.461842    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:11.461874    9884 retry.go:31] will retry after 1.497924391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:12.960237    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:13.024815    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:13.024838    9884 retry.go:31] will retry after 1.824852565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:14.850238    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:14.913351    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:14.913368    9884 retry.go:31] will retry after 5.729835451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:20.643901    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:20.703183    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:20.703202    9884 retry.go:31] will retry after 6.20564999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:26.911486    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:26.971610    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:26.971626    9884 retry.go:31] will retry after 8.337026795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:35.309113    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:35.372677    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:35.372695    9884 retry.go:31] will retry after 14.149740582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:49.524777    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:15:49.587925    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:15:49.587948    9884 retry.go:31] will retry after 23.832098215s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:16:13.420818    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:16:13.475441    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:16:13.475457    9884 retry.go:31] will retry after 47.134828661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:17:00.610152    9884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0213 15:17:00.669255    9884 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0213 15:17:00.691120    9884 out.go:177] 
	W0213 15:17:00.711960    9884 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0213 15:17:00.711995    9884 out.go:239] * 
	* 
	W0213 15:17:00.714982    9884 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:00.735940    9884 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-181000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-181000:

-- stdout --
	[
	    {
	        "Id": "cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3",
	        "Created": "2024-02-13T23:09:26.558440668Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 59926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:09:26.75414569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hosts",
	        "LogPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3-json.log",
	        "Name": "/ingress-addon-legacy-181000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-181000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-181000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-181000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-181000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-181000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd92e9e88d73de7a78015de8fc26617c1d825183139de2670ee6ca0690697d7",
	            "SandboxKey": "/var/run/docker/netns/dfd92e9e88d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53251"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-181000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cccacda9b19a",
	                        "ingress-addon-legacy-181000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fc69d5909d8ff616a13bdca0728fa605a5fdcd31701ff62b9278b3e78dfe2543",
	                    "EndpointID": "901703549c6846a31b57414236bd57a6d56f5c1d4f06f75f7d380938da61bf9a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-181000",
	                        "cccacda9b19a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000: exit status 6 (388.432908ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0213 15:17:01.184315    9907 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-181000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-181000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (114.33s)
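The exit status 6 here traces to the missing kubeconfig entry reported in stderr ("ingress-addon-legacy-181000" does not appear in the kubeconfig), not to a stopped host; docker inspect still shows the container running. A plausible manual repair, assuming the profile directory is intact, is the one the status output itself suggests:

	$ out/minikube-darwin-amd64 -p ingress-addon-legacy-181000 update-context
	$ kubectl config current-context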
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-181000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-181000:
-- stdout --
	[
	    {
	        "Id": "cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3",
	        "Created": "2024-02-13T23:09:26.558440668Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 59926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:09:26.75414569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/hosts",
	        "LogPath": "/var/lib/docker/containers/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3/cccacda9b19a413941ec0f20ee8e46ee9c6022ad4900699e65f44aeaba72bcd3-json.log",
	        "Name": "/ingress-addon-legacy-181000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-181000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-181000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a704f9949dc660981b6c4742a6ee76a82030cc9ef1cfb4c88086f2d76538b6c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-181000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-181000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-181000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-181000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd92e9e88d73de7a78015de8fc26617c1d825183139de2670ee6ca0690697d7",
	            "SandboxKey": "/var/run/docker/netns/dfd92e9e88d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53249"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53250"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53251"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-181000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cccacda9b19a",
	                        "ingress-addon-legacy-181000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fc69d5909d8ff616a13bdca0728fa605a5fdcd31701ff62b9278b3e78dfe2543",
	                    "EndpointID": "901703549c6846a31b57414236bd57a6d56f5c1d4f06f75f7d380938da61bf9a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-181000",
	                        "cccacda9b19a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-181000 -n ingress-addon-legacy-181000: exit status 6 (388.957895ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0213 15:17:01.625247    9919 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-181000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-181000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)
TestSkaffold (319.24s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2870327310 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2870327310 version: (1.916679511s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-768000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-768000 --memory=2600 --driver=docker : (23.498970699s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2870327310 run --minikube-profile skaffold-768000 --kube-context skaffold-768000 --status-check=true --port-forward=false --interactive=false
E0213 15:34:17.244519    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:35:14.221157    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:37:20.291892    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2870327310 run --minikube-profile skaffold-768000 --kube-context skaffold-768000 --status-check=true --port-forward=false --interactive=false: signal: killed (4m41.740173625s)
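In the stdout below, layer sha256:bfcb68b5… sits at 49.28MB / 85.98MB from roughly 11s to 204s, so most of the 4m41s before the kill arrived was spent on a stalled pull of docker.io/library/golang:1.18. A hedged mitigation sketch, assuming the docker-driver daemon is the build target as the "using local docker daemon" line indicates, is to pre-pull the builder image before `skaffold run`:

	$ eval $(out/minikube-darwin-amd64 -p skaffold-768000 docker-env)
	$ docker pull docker.io/library/golang:1.18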
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-768000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 250B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for gcr.io/distroless/base:latest
	#3 DONE 2.5s
	
	#4 [1/1] FROM gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.1s
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.1s
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.9s done
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 1.1s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.2s
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.3s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.4s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.5s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.6s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.8s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s done
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.9s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.9s
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 1.9s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 2.0s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.2s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 2.1s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.3s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.7s done
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 2.7s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 2.8s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 3.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.1s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 4.19MB / 5.85MB 3.5s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.5s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.1s
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 4.0s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 1.2s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 430B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 15.73MB / 55.03MB 0.2s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 54.53MB / 55.03MB 0.4s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 12.58MB / 54.58MB 0.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 9.44MB / 85.98MB 0.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 0.5s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 37.21MB / 54.58MB 0.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 27.26MB / 85.98MB 0.6s
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0B / 141.98MB 0.6s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 49.28MB / 54.58MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 0.8s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 35.65MB / 85.98MB 0.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 14.68MB / 141.98MB 0.8s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 0.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 47.19MB / 85.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 22.02MB / 141.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 45.09MB / 141.98MB 1.1s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 0.9s done
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 72.35MB / 141.98MB 1.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 85.98MB / 141.98MB 1.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 122.68MB / 141.98MB 1.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 136.31MB / 141.98MB 1.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 2.0s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 4.3s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 6.0s
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.5s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 11.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 16.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 21.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 26.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 31.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 36.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 41.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 46.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 51.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 56.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 62.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 67.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 72.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 77.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 82.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 87.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 92.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 97.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 102.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 107.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 112.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 117.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 122.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 127.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 132.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 137.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 143.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 148.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 153.2s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 158.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 163.3s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 168.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 173.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 178.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 183.7s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 188.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 193.9s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 199.0s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 204.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 56.62MB / 85.98MB 205.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 78.64MB / 85.98MB 205.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 205.9s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.5s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0.1s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.2s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 8.9s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 218.7s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.2s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 23.1s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.1s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:f7e522b2b727731452a9a87785f86ec8e6fd6c4e65b7756dbd17391185ba1ac7 done
	#13 naming to docker.io/library/leeroy-app:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-app] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load .dockerignore
	#2 transferring context: 2B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 0.3s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 565B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
-- /stdout --
** stderr ** 
	time="2024-02-13T15:33:31-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-13T15:33:31-08:00" level=error msg="Please run:"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="to obtain new credentials."
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="to select an already authenticated account to use."
** /stderr **
skaffold_test.go:107: error running skaffold: signal: killed
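The gcloud messages in stderr come from skaffold invoking `gcloud config config-helper` to probe for registry credentials; the builds proceeded past them in the stdout above, so they appear advisory for this run. If credentials were actually required, the remedy printed in the log itself would apply:

	$ gcloud auth login
	$ gcloud config set account ACCOUNT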
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	 - base -> base:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	 - base: Not found. Building
	Starting build...
	Found [skaffold-768000] context, using local docker daemon.
	Building [base]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 250B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for gcr.io/distroless/base:latest
	#3 DONE 2.5s
	
	#4 [1/1] FROM gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f
	#4 resolve gcr.io/distroless/base@sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f done
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 0B / 103.78kB 0.1s
	#4 sha256:9d4e5680d67c984ac9c957f66405de25634012e2d5d6dc396c4bdd2ba6ae569f 1.51kB / 1.51kB done
	#4 sha256:c8500b45821ad3ad625d1689bbe0fd12ca31d22865fbf19cc2e982f759ae2133 1.60kB / 1.60kB done
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 0B / 21.20kB 0.1s
	#4 sha256:13190661cbc681abf8c1f3546231bb1ff46c88ce4750a2818426c6e493a09163 2.12kB / 2.12kB done
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0B / 755.29kB 0.1s
	#4 sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea 103.78kB / 103.78kB 0.9s done
	#4 extracting sha256:6b16ad2aede1c00fe5f9765419c2165fd72902e768db3126ee68d127cae394ea done
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 0B / 317B 0.9s
	#4 sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 21.20kB / 21.20kB 1.1s done
	#4 extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58 done
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 0B / 198B 1.2s
	#4 sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B 1.3s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 0B / 113B 1.4s
	#4 sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B 1.5s done
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 0B / 385B 1.6s
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab
	#4 sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 755.29kB / 755.29kB 1.8s done
	#4 sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B 1.7s done
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 0B / 130.56kB 1.9s
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 0B / 355B 1.9s
	#4 sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B 1.9s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0B / 5.85MB 2.0s
	#4 sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a 130.56kB / 130.56kB 2.2s done
	#4 sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B 2.1s done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0B / 2.06MB 2.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0B / 968.57kB 2.3s
	#4 extracting sha256:be1681d2fb7c6bc072dddd952d4fa0428a3a3c60b53cdde852e30aaa86f7e1ab 0.7s done
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 1.05MB / 2.06MB 2.7s
	#4 extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 done
	#4 extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 done
	#4 extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c done
	#4 extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f done
	#4 extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c done
	#4 extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a done
	#4 sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 2.06MB / 2.06MB 2.8s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 2.10MB / 5.85MB 3.3s
	#4 sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 968.57kB / 968.57kB 3.1s done
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 4.19MB / 5.85MB 3.5s
	#4 sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 5.85MB / 5.85MB 3.5s done
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.1s
	#4 extracting sha256:fb86e1ee192b9a5486e4fda23fbf23ed490368278c15227969043f39f0fdd1e3 0.2s done
	#4 extracting sha256:ebba9ccde3efe3177f5a74772e6e85446e7cbad9528c1c169e403a1981429d14 0.0s done
	#4 extracting sha256:1933f300df8c747385bc1e9a261b9fc7ec89b0c02b51439a3759344a643a4bb9 0.0s done
	#4 DONE 4.0s
	
	#5 exporting to image
	#5 exporting layers done
	#5 writing image sha256:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789 done
	#5 naming to docker.io/library/base:latest done
	#5 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [base] succeeded
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load .dockerignore
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 transferring dockerfile: 326B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 1.2s
	
	#5 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#5 CACHED
	
	#6 [internal] load build context
	#6 transferring context: 430B done
	#6 DONE 0.0s
	
	#7 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#7 resolve docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da done
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0B / 10.88MB 0.1s
	#7 sha256:740324e52de766f230ad7113fac9028399d6e03af34883de625dc2230ef7927e 1.80kB / 1.80kB done
	#7 sha256:c37a56a6d65476eabfb50e74421f16f415093e2d1bdd7f83e8bbb4b1a3eb2109 7.12kB / 7.12kB done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 0B / 55.03MB 0.1s
	#7 sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da 2.36kB / 2.36kB done
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0B / 5.16MB 0.1s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 15.73MB / 55.03MB 0.2s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s
	#7 sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 10.88MB / 10.88MB 0.2s done
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 54.53MB / 55.03MB 0.4s
	#7 sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 5.16MB / 5.16MB 0.2s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 12.58MB / 54.58MB 0.4s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 9.44MB / 85.98MB 0.4s
	#7 sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 55.03MB / 55.03MB 0.5s done
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 37.21MB / 54.58MB 0.6s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 27.26MB / 85.98MB 0.6s
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0B / 141.98MB 0.6s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 49.28MB / 54.58MB 0.7s
	#7 sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 54.58MB / 54.58MB 0.8s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 35.65MB / 85.98MB 0.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 14.68MB / 141.98MB 0.8s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 0B / 156B 0.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 47.19MB / 85.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 22.02MB / 141.98MB 0.9s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 45.09MB / 141.98MB 1.1s
	#7 sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a 156B / 156B 0.9s done
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 61.87MB / 141.98MB 1.2s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 72.35MB / 141.98MB 1.3s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 85.98MB / 141.98MB 1.5s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 122.68MB / 141.98MB 1.7s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 136.31MB / 141.98MB 1.8s
	#7 sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 141.98MB / 141.98MB 2.0s done
	#7 extracting sha256:bbeef03cda1f5d6c9e20c310c1c91382a6b0a1a2501c3436b28152f13896f082 4.3s done
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1
	#7 extracting sha256:f049f75f014ee8fec2d4728b203c9cbee0502ce142aec030f874aa28359e25f1 0.3s done
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.1s
	#7 extracting sha256:56261d0e6b05ece42650b14830960db5b42a9f23479d868256f91d96869ac0c2 0.3s done
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 6.0s
	#7 extracting sha256:9bd150679dbdb02d9d4df4457d54211d6ee719ca7bc77747a7be4cd99ae03988 3.5s done
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 11.0s
	#7 [the progress line above repeated unchanged at 49.28MB / 85.98MB roughly every 5 seconds from 16.2s through 199.0s; the download of layer sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 stalled for roughly 195 seconds before resuming below]
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 49.28MB / 85.98MB 204.1s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 56.62MB / 85.98MB 205.5s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 78.64MB / 85.98MB 205.8s
	#7 sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 85.98MB / 85.98MB 205.9s done
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103
	#7 extracting sha256:bfcb68b5bd105d3f88a2c15354cff6c253bedc41d83c1da28b3d686c37cd9103 3.5s done
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 0.1s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 5.2s
	#7 extracting sha256:06d0c5d18ef41fa1c2382bd2afd189a01ebfff4910b868879b6dcfeef46bc003 8.9s done
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a
	#7 extracting sha256:cc7973a07a5b4a44399c5d36fa142f37bb343bb123a3736357365fd9040ca38a done
	#7 DONE 218.7s
	
	#8 [builder 2/5] WORKDIR /code
	#8 DONE 0.2s
	
	#9 [builder 3/5] COPY app.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .
	#11 DONE 23.1s
	
	#12 [stage-1 2/2] COPY --from=builder /app .
	#12 DONE 0.1s
	
	#13 exporting to image
	#13 exporting layers 0.0s done
	#13 writing image sha256:f7e522b2b727731452a9a87785f86ec8e6fd6c4e65b7756dbd17391185ba1ac7 done
	#13 naming to docker.io/library/leeroy-app:latest done
	#13 DONE 0.0s
	
	What's Next?
	  1. Sign in to your Docker account → docker login
	  2. View a summary of image vulnerabilities and recommendations → docker scout quickview
	Build [leeroy-app] succeeded
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#0 building with "default" instance using docker driver
	
	#1 [internal] load build definition from Dockerfile
	#1 transferring dockerfile: 326B done
	#1 DONE 0.0s
	
	#2 [internal] load .dockerignore
	#2 transferring context: 2B done
	#2 DONE 0.0s
	
	#3 [internal] load metadata for docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#3 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.18
	#4 DONE 0.3s
	
	#5 [builder 1/5] FROM docker.io/library/golang:1.18@sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
	#5 DONE 0.0s
	
	#6 [stage-1 1/2] FROM docker.io/library/base:5f752032c428c256e52e9ea36859d8a31baf0c90d5f772e2f2208a0109ebc789
	#6 DONE 0.0s
	
	#7 [builder 2/5] WORKDIR /code
	#7 CACHED
	
	#8 [internal] load build context
	#8 transferring context: 565B done
	#8 DONE 0.0s
	
	#9 [builder 3/5] COPY web.go .
	#9 DONE 0.0s
	
	#10 [builder 4/5] COPY go.mod .
	#10 DONE 0.0s
	
	#11 [builder 5/5] RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -trimpath -o /app .

-- /stdout --
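The truncated stdout above ends mid-way through the leeroy-web build; before that, the leeroy-app image built successfully from a two-stage Dockerfile (WORKDIR /code, COPY app.go and go.mod, go build, then COPY --from=builder into the runtime stage). As a rough sketch, the same image can be rebuilt by hand outside skaffold; SKAFFOLD_GO_GCFLAGS is the build arg skaffold normally injects for Go debug builds (left empty here), and the ./leeroy-app context path is an assumed layout, not taken from this log:

	# Sketch: rebuild the leeroy-app image without skaffold (context path assumed)
	docker build \
	  --build-arg SKAFFOLD_GO_GCFLAGS="" \
	  -t leeroy-app:latest \
	  ./leeroy-app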
** stderr ** 
	time="2024-02-13T15:33:31-08:00" level=error msg="ERROR: (gcloud.config.config-helper) You do not currently have an active account selected."
	time="2024-02-13T15:33:31-08:00" level=error msg="Please run:"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="  $ gcloud auth login"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="to obtain new credentials."
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="If you have already logged in with a different account, run:"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="  $ gcloud config set account ACCOUNT"
	time="2024-02-13T15:33:31-08:00" level=error
	time="2024-02-13T15:33:31-08:00" level=error msg="to select an already authenticated account to use."

** /stderr **
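The stderr block is skaffold shelling out to gcloud on an agent with no active account; the builds still proceeded after these messages, so they read as noise rather than the failure itself. A sketch of the two usual remedies (the key-file path below is hypothetical):

	# Interactive machines: log in as the error text suggests
	gcloud auth login
	# Headless CI agents: activate a service account instead (hypothetical key path)
	gcloud auth activate-service-account --key-file=/path/to/sa-key.json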
panic.go:523: *** TestSkaffold FAILED at 2024-02-13 15:38:10.229488 -0800 PST m=+2740.355969818
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-768000
helpers_test.go:235: (dbg) docker inspect skaffold-768000:

-- stdout --
	[
	    {
	        "Id": "6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1",
	        "Created": "2024-02-13T23:33:09.788857722Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182079,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:33:10.004582673Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1/hosts",
	        "LogPath": "/var/lib/docker/containers/6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1/6f278271979a8b6e6641a351e14b48767430f8cf96ab3f45830080e9120fd0c1-json.log",
	        "Name": "/skaffold-768000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-768000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-768000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/001e949c2298f232ebca4ac2dc12bfc535212749f1faddc57e243b1889997df0-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/001e949c2298f232ebca4ac2dc12bfc535212749f1faddc57e243b1889997df0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/001e949c2298f232ebca4ac2dc12bfc535212749f1faddc57e243b1889997df0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/001e949c2298f232ebca4ac2dc12bfc535212749f1faddc57e243b1889997df0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-768000",
	                "Source": "/var/lib/docker/volumes/skaffold-768000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-768000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-768000",
	                "name.minikube.sigs.k8s.io": "skaffold-768000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ce8cf9facd21285ad5686d5456175e3d79288f8ed8ed4b090a5ed35540f5fed",
	            "SandboxKey": "/var/run/docker/netns/3ce8cf9facd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54233"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54234"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54235"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54236"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54237"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-768000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f278271979a",
	                        "skaffold-768000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "a8348c97d5d840ddcbfac2df78a1c8e5ed77e47db0c33db67f71db59f56cdd9f",
	                    "EndpointID": "a289f4f0d5b50cd3ef717d0368f9a4e9a9e56c71d006d21ea022e35103c42d16",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "skaffold-768000",
	                        "6f278271979a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
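The dump above is the full docker inspect output the harness captures. When probing by hand, a Go-template --format query pulls individual fields out of the same JSON; both examples below use fields visible in the dump, and the second mirrors the 22/tcp query minikube itself runs later in these logs:

	# Container state and PID, per the "State" object above
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' skaffold-768000
	# Host port mapped to 8443/tcp ("54237" in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' skaffold-768000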
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-768000 -n skaffold-768000
helpers_test.go:244: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p skaffold-768000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p skaffold-768000 logs -n 25: (2.578001918s)
helpers_test.go:252: TestSkaffold logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile        |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	| start      | -p multinode-727000-m02        | multinode-727000-m02  | jenkins  | v1.32.0 | 13 Feb 24 15:27 PST |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| start      | -p multinode-727000-m03        | multinode-727000-m03  | jenkins  | v1.32.0 | 13 Feb 24 15:27 PST | 13 Feb 24 15:28 PST |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| node       | add -p multinode-727000        | multinode-727000      | jenkins  | v1.32.0 | 13 Feb 24 15:28 PST |                     |
	| delete     | -p multinode-727000-m03        | multinode-727000-m03  | jenkins  | v1.32.0 | 13 Feb 24 15:28 PST | 13 Feb 24 15:28 PST |
	| delete     | -p multinode-727000            | multinode-727000      | jenkins  | v1.32.0 | 13 Feb 24 15:28 PST | 13 Feb 24 15:28 PST |
	| start      | -p test-preload-332000         | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:28 PST | 13 Feb 24 15:29 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr              |                       |          |         |                     |                     |
	|            | --wait=true --preload=false    |                       |          |         |                     |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.4   |                       |          |         |                     |                     |
	| image      | test-preload-332000 image pull | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:29 PST | 13 Feb 24 15:29 PST |
	|            | gcr.io/k8s-minikube/busybox    |                       |          |         |                     |                     |
	| stop       | -p test-preload-332000         | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:29 PST | 13 Feb 24 15:30 PST |
	| start      | -p test-preload-332000         | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:30 PST | 13 Feb 24 15:31 PST |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                       |          |         |                     |                     |
	|            | --wait=true --driver=docker    |                       |          |         |                     |                     |
	| image      | test-preload-332000 image list | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST | 13 Feb 24 15:31 PST |
	| delete     | -p test-preload-332000         | test-preload-332000   | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST | 13 Feb 24 15:31 PST |
	| start      | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST | 13 Feb 24 15:31 PST |
	|            | --memory=2048 --driver=docker  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:31 PST | 13 Feb 24 15:31 PST |
	|            | --cancel-scheduled             |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:32 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:32 PST |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:32 PST | 13 Feb 24 15:32 PST |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| delete     | -p scheduled-stop-985000       | scheduled-stop-985000 | jenkins  | v1.32.0 | 13 Feb 24 15:32 PST | 13 Feb 24 15:32 PST |
	| start      | -p skaffold-768000             | skaffold-768000       | jenkins  | v1.32.0 | 13 Feb 24 15:33 PST | 13 Feb 24 15:33 PST |
	|            | --memory=2600 --driver=docker  |                       |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-768000       | skaffold | v1.32.0 | 13 Feb 24 15:33 PST | 13 Feb 24 15:33 PST |
	|            | skaffold-768000                |                       |          |         |                     |                     |
	|            | --user=skaffold                |                       |          |         |                     |                     |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 15:33:04
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 15:33:04.941175   13999 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:33:04.941324   13999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:33:04.941326   13999 out.go:304] Setting ErrFile to fd 2...
	I0213 15:33:04.941329   13999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:33:04.941520   13999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:33:04.943120   13999 out.go:298] Setting JSON to false
	I0213 15:33:04.966937   13999 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4044,"bootTime":1707863140,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:33:04.967018   13999 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:33:05.007367   13999 out.go:177] * [skaffold-768000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:33:05.118168   13999 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:33:05.085491   13999 notify.go:220] Checking for updates...
	I0213 15:33:05.162575   13999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:33:05.205268   13999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:33:05.248149   13999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:33:05.291299   13999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:33:05.335313   13999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:33:05.358604   13999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:33:05.416594   13999 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:33:05.416748   13999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:33:05.525648   13999 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:33:05.516557802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:33:05.571223   13999 out.go:177] * Using the docker driver based on user configuration
	I0213 15:33:05.593311   13999 start.go:298] selected driver: docker
	I0213 15:33:05.593328   13999 start.go:902] validating driver "docker" against <nil>
	I0213 15:33:05.593346   13999 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:33:05.597812   13999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:33:05.705220   13999 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:33:05.695217001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:33:05.705387   13999 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:33:05.705657   13999 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 15:33:05.730248   13999 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 15:33:05.751330   13999 cni.go:84] Creating CNI manager for ""
	I0213 15:33:05.751347   13999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:33:05.751362   13999 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:33:05.751372   13999 start_flags.go:321] config:
	{Name:skaffold-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:33:05.797243   13999 out.go:177] * Starting control plane node skaffold-768000 in cluster skaffold-768000
	I0213 15:33:05.818455   13999 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 15:33:05.840388   13999 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 15:33:05.883336   13999 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:33:05.883386   13999 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 15:33:05.883401   13999 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 15:33:05.883415   13999 cache.go:56] Caching tarball of preloaded images
	I0213 15:33:05.883630   13999 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 15:33:05.883643   13999 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:33:05.884948   13999 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/config.json ...
	I0213 15:33:05.885024   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/config.json: {Name:mk406f8243ebba06c58638fa9a7e7ed609c56bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:05.938330   13999 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 15:33:05.938343   13999 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 15:33:05.938364   13999 cache.go:194] Successfully downloaded all kic artifacts
	I0213 15:33:05.938413   13999 start.go:365] acquiring machines lock for skaffold-768000: {Name:mk774fd0e4990e2a6ff284911872185f4154833d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:33:05.938561   13999 start.go:369] acquired machines lock for "skaffold-768000" in 135.125µs
	I0213 15:33:05.938585   13999 start.go:93] Provisioning new machine with config: &{Name:skaffold-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:33:05.938647   13999 start.go:125] createHost starting for "" (driver="docker")
	I0213 15:33:05.960666   13999 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0213 15:33:05.961044   13999 start.go:159] libmachine.API.Create for "skaffold-768000" (driver="docker")
	I0213 15:33:05.961082   13999 client.go:168] LocalClient.Create starting
	I0213 15:33:05.961244   13999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
	I0213 15:33:05.961327   13999 main.go:141] libmachine: Decoding PEM data...
	I0213 15:33:05.961355   13999 main.go:141] libmachine: Parsing certificate...
	I0213 15:33:05.961459   13999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
	I0213 15:33:05.961504   13999 main.go:141] libmachine: Decoding PEM data...
	I0213 15:33:05.961512   13999 main.go:141] libmachine: Parsing certificate...
	I0213 15:33:05.962358   13999 cli_runner.go:164] Run: docker network inspect skaffold-768000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 15:33:06.016997   13999 cli_runner.go:211] docker network inspect skaffold-768000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 15:33:06.017097   13999 network_create.go:281] running [docker network inspect skaffold-768000] to gather additional debugging logs...
	I0213 15:33:06.017112   13999 cli_runner.go:164] Run: docker network inspect skaffold-768000
	W0213 15:33:06.070938   13999 cli_runner.go:211] docker network inspect skaffold-768000 returned with exit code 1
	I0213 15:33:06.070969   13999 network_create.go:284] error running [docker network inspect skaffold-768000]: docker network inspect skaffold-768000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network skaffold-768000 not found
	I0213 15:33:06.070978   13999 network_create.go:286] output of [docker network inspect skaffold-768000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network skaffold-768000 not found
	
	** /stderr **
	I0213 15:33:06.071107   13999 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 15:33:06.126436   13999 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:33:06.126831   13999 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002175790}
	I0213 15:33:06.126849   13999 network_create.go:124] attempt to create docker network skaffold-768000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 15:33:06.126909   13999 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-768000 skaffold-768000
	W0213 15:33:06.200903   13999 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-768000 skaffold-768000 returned with exit code 1
	W0213 15:33:06.200937   13999 network_create.go:149] failed to create docker network skaffold-768000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-768000 skaffold-768000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 15:33:06.200958   13999 network_create.go:116] failed to create docker network skaffold-768000 192.168.58.0/24, will retry: subnet is taken
	I0213 15:33:06.202435   13999 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:33:06.203020   13999 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a87c0}
	I0213 15:33:06.203029   13999 network_create.go:124] attempt to create docker network skaffold-768000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 15:33:06.203152   13999 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-768000 skaffold-768000
	I0213 15:33:06.298976   13999 network_create.go:108] docker network skaffold-768000 192.168.67.0/24 created
	I0213 15:33:06.299003   13999 kic.go:121] calculated static IP "192.168.67.2" for the "skaffold-768000" container
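The lines above show minikube's subnet probe: 192.168.49.0/24 was skipped as reserved, 192.168.58.0/24 failed with "Pool overlaps with other one on this address space", and 192.168.67.0/24 succeeded. A stripped-down sketch of that retry loop:

	# Try candidate /24 subnets until `docker network create` stops reporting
	# an address-pool overlap (minikube additionally sets gateway, MTU, and labels)
	for subnet in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24; do
	  docker network create --driver=bridge --subnet="$subnet" skaffold-768000 && break
	done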
	I0213 15:33:06.299105   13999 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 15:33:06.351681   13999 cli_runner.go:164] Run: docker volume create skaffold-768000 --label name.minikube.sigs.k8s.io=skaffold-768000 --label created_by.minikube.sigs.k8s.io=true
	I0213 15:33:06.405718   13999 oci.go:103] Successfully created a docker volume skaffold-768000
	I0213 15:33:06.405838   13999 cli_runner.go:164] Run: docker run --rm --name skaffold-768000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-768000 --entrypoint /usr/bin/test -v skaffold-768000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 15:33:06.815813   13999 oci.go:107] Successfully prepared a docker volume skaffold-768000
	I0213 15:33:06.815845   13999 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:33:06.815858   13999 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 15:33:06.815933   13999 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-768000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 15:33:09.627812   13999 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-768000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.811856668s)
	I0213 15:33:09.627833   13999 kic.go:203] duration metric: took 2.812031 seconds to extract preloaded images to volume
	I0213 15:33:09.627950   13999 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 15:33:09.736005   13999 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-768000 --name skaffold-768000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-768000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-768000 --network skaffold-768000 --ip 192.168.67.2 --volume skaffold-768000:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 15:33:10.013217   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Running}}
	I0213 15:33:10.069998   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:10.129697   13999 cli_runner.go:164] Run: docker exec skaffold-768000 stat /var/lib/dpkg/alternatives/iptables
	I0213 15:33:10.308044   13999 oci.go:144] the created container "skaffold-768000" has a running status.
	I0213 15:33:10.308108   13999 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa...
	I0213 15:33:10.566137   13999 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 15:33:10.629614   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:10.685155   13999 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 15:33:10.685173   13999 kic_runner.go:114] Args: [docker exec --privileged skaffold-768000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 15:33:10.788402   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:10.840937   13999 machine.go:88] provisioning docker machine ...
	I0213 15:33:10.841015   13999 ubuntu.go:169] provisioning hostname "skaffold-768000"
	I0213 15:33:10.841133   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:10.894021   13999 main.go:141] libmachine: Using SSH client type: native
	I0213 15:33:10.894337   13999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54233 <nil> <nil>}
	I0213 15:33:10.894351   13999 main.go:141] libmachine: About to run SSH command:
	sudo hostname skaffold-768000 && echo "skaffold-768000" | sudo tee /etc/hostname
	I0213 15:33:11.056327   13999 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-768000
	
	I0213 15:33:11.056415   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:11.109300   13999 main.go:141] libmachine: Using SSH client type: native
	I0213 15:33:11.109595   13999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54233 <nil> <nil>}
	I0213 15:33:11.109608   13999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-768000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-768000/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-768000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:33:11.245723   13999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:33:11.245753   13999 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 15:33:11.245776   13999 ubuntu.go:177] setting up certificates
	I0213 15:33:11.245781   13999 provision.go:83] configureAuth start
	I0213 15:33:11.245837   13999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-768000
	I0213 15:33:11.298924   13999 provision.go:138] copyHostCerts
	I0213 15:33:11.299029   13999 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 15:33:11.299035   13999 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 15:33:11.299174   13999 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 15:33:11.299381   13999 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 15:33:11.299385   13999 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 15:33:11.299482   13999 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 15:33:11.299665   13999 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 15:33:11.299668   13999 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 15:33:11.299782   13999 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 15:33:11.299929   13999 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.skaffold-768000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-768000]
	I0213 15:33:11.484716   13999 provision.go:172] copyRemoteCerts
	I0213 15:33:11.484769   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:33:11.484817   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:11.537916   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:11.643079   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:33:11.685711   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0213 15:33:11.728276   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 15:33:11.771674   13999 provision.go:86] duration metric: configureAuth took 525.889945ms
	I0213 15:33:11.771707   13999 ubuntu.go:193] setting minikube options for container-runtime
	I0213 15:33:11.771831   13999 config.go:182] Loaded profile config "skaffold-768000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:33:11.771891   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:11.824110   13999 main.go:141] libmachine: Using SSH client type: native
	I0213 15:33:11.824437   13999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54233 <nil> <nil>}
	I0213 15:33:11.824448   13999 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:33:11.962926   13999 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 15:33:11.962942   13999 ubuntu.go:71] root file system type: overlay
	I0213 15:33:11.963053   13999 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:33:11.963129   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:12.050568   13999 main.go:141] libmachine: Using SSH client type: native
	I0213 15:33:12.050887   13999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54233 <nil> <nil>}
	I0213 15:33:12.050933   13999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:33:12.214830   13999 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:33:12.214908   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:12.268049   13999 main.go:141] libmachine: Using SSH client type: native
	I0213 15:33:12.268358   13999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54233 <nil> <nil>}
	I0213 15:33:12.268369   13999 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:33:13.004967   13999 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-13 23:33:12.208328941 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 15:33:13.004992   13999 machine.go:91] provisioned docker machine in 2.164055287s
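	The diff output above shows the update strategy at work: render the desired unit to docker.service.new, and only when diff reports a difference move it into place and daemon-reload/enable/restart. The same "replace only on change" logic, spelled out as a sketch (paths as in the log):
	
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits 0 when the files match, so the block runs only on a change
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi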
	I0213 15:33:13.005008   13999 client.go:171] LocalClient.Create took 7.044059149s
	I0213 15:33:13.005035   13999 start.go:167] duration metric: libmachine.API.Create for "skaffold-768000" took 7.04413048s
	I0213 15:33:13.005050   13999 start.go:300] post-start starting for "skaffold-768000" (driver="docker")
	I0213 15:33:13.005063   13999 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:33:13.005157   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:33:13.005295   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:13.067130   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:13.174951   13999 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:33:13.179666   13999 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 15:33:13.179686   13999 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 15:33:13.179692   13999 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 15:33:13.179696   13999 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 15:33:13.179704   13999 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 15:33:13.179789   13999 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 15:33:13.179973   13999 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 15:33:13.180192   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:33:13.197004   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:33:13.242684   13999 start.go:303] post-start completed in 237.629843ms
	I0213 15:33:13.243423   13999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-768000
	I0213 15:33:13.300299   13999 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/config.json ...
	I0213 15:33:13.301044   13999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:33:13.301120   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:13.360185   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:13.456877   13999 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 15:33:13.463581   13999 start.go:128] duration metric: createHost completed in 7.525061138s
	I0213 15:33:13.463597   13999 start.go:83] releasing machines lock for "skaffold-768000", held for 7.525178584s
	I0213 15:33:13.463702   13999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-768000
	I0213 15:33:13.519013   13999 ssh_runner.go:195] Run: cat /version.json
	I0213 15:33:13.519027   13999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:33:13.519088   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:13.519107   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:13.578698   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:13.578705   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:13.786172   13999 ssh_runner.go:195] Run: systemctl --version
	I0213 15:33:13.791740   13999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 15:33:13.797228   13999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 15:33:13.844868   13999 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 15:33:13.844933   13999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 15:33:13.891660   13999 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
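	The two find commands above patch CNI configs in place: the first injects a "name" key into the loopback config and pins cniVersion to 1.0.0, the second hides any bridge/podman configs by renaming them with a .mk_disabled suffix that CNI does not load. The rename half as a standalone sketch (directory as in the log, loop is illustrative):
	
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] || continue
	  case "$f" in *.mk_disabled) continue ;; esac
	  # CNI only loads recognized config extensions, so the suffix disables the file
	  sudo mv "$f" "$f.mk_disabled"
	done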
	I0213 15:33:13.891672   13999 start.go:475] detecting cgroup driver to use...
	I0213 15:33:13.891682   13999 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:33:13.891832   13999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:33:13.924658   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 15:33:13.941802   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:33:13.960543   13999 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:33:13.960605   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:33:13.979458   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:33:13.997341   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:33:14.017422   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:33:14.033925   13999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:33:14.052900   13999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:33:14.073008   13999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:33:14.089097   13999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:33:14.106422   13999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:33:14.175350   13999 ssh_runner.go:195] Run: sudo systemctl restart containerd
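	The sed run above rewrites /etc/containerd/config.toml in place: cgroupfs as the cgroup driver, the runc v2 shim, the pause 3.9 sandbox image, and the standard CNI conf dir, followed by a daemon-reload and restart. After edits like these, one way to confirm what containerd actually loaded (a hedged check, not taken from the log):
	
	sudo containerd config dump | grep -nE 'SystemdCgroup|sandbox_image'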
	I0213 15:33:14.272052   13999 start.go:475] detecting cgroup driver to use...
	I0213 15:33:14.272068   13999 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:33:14.272134   13999 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:33:14.292557   13999 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 15:33:14.292626   13999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:33:14.316366   13999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:33:14.349922   13999 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:33:14.356073   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:33:14.375030   13999 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:33:14.411748   13999 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:33:14.521861   13999 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:33:14.618923   13999 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:33:14.619042   13999 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:33:14.648836   13999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:33:14.715729   13999 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:33:14.968566   13999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 15:33:14.994656   13999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:33:15.014935   13999 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 15:33:15.088221   13999 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 15:33:15.156392   13999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:33:15.221975   13999 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 15:33:15.251161   13999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:33:15.271226   13999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:33:15.334974   13999 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 15:33:15.425023   13999 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 15:33:15.425522   13999 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 15:33:15.430323   13999 start.go:543] Will wait 60s for crictl version
	I0213 15:33:15.430375   13999 ssh_runner.go:195] Run: which crictl
	I0213 15:33:15.434549   13999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 15:33:15.493550   13999 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 15:33:15.493621   13999 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:33:15.520747   13999 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:33:15.592114   13999 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 15:33:15.592208   13999 cli_runner.go:164] Run: docker exec -t skaffold-768000 dig +short host.docker.internal
	I0213 15:33:15.712314   13999 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 15:33:15.712897   13999 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 15:33:15.718000   13999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
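	The grep/cp pair above is an idempotent /etc/hosts update: drop any existing line for the name, append the fresh mapping, and copy the rewritten file back. Generalized as a sketch (IP and hostname taken from the log):
	
	ip=192.168.65.254
	name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
	rm -f /tmp/h.$$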
	I0213 15:33:15.735835   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:15.790697   13999 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:33:15.790781   13999 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:33:15.810962   13999 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:33:15.810975   13999 docker.go:615] Images already preloaded, skipping extraction
	I0213 15:33:15.811080   13999 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:33:15.832565   13999 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:33:15.832587   13999 cache_images.go:84] Images are preloaded, skipping loading
	I0213 15:33:15.832660   13999 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:33:15.883203   13999 cni.go:84] Creating CNI manager for ""
	I0213 15:33:15.883218   13999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:33:15.883231   13999 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:33:15.883246   13999 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-768000 NodeName:skaffold-768000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 15:33:15.883358   13999 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "skaffold-768000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
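	One way to sanity-check a generated config like the one above before committing to a real init is kubeadm's dry-run mode; a minimal sketch, assuming the path used later in this log:
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run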
	
	I0213 15:33:15.883414   13999 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=skaffold-768000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:skaffold-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 15:33:15.883470   13999 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 15:33:15.898692   13999 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:33:15.898736   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:33:15.915978   13999 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0213 15:33:15.945314   13999 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:33:15.977313   13999 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0213 15:33:16.008036   13999 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 15:33:16.012983   13999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:33:16.031291   13999 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000 for IP: 192.168.67.2
	I0213 15:33:16.031311   13999 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.031517   13999 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 15:33:16.031592   13999 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 15:33:16.031637   13999 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.key
	I0213 15:33:16.031647   13999 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.crt with IP's: []
	I0213 15:33:16.192601   13999 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.crt ...
	I0213 15:33:16.192612   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.crt: {Name:mk473b23d1b4683d1bcff05905a772cb5a30cc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.200422   13999 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.key ...
	I0213 15:33:16.200436   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/client.key: {Name:mkfc14a4b296743bb8f79264957b2cdcc8d64af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.203182   13999 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key.c7fa3a9e
	I0213 15:33:16.203197   13999 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 15:33:16.521862   13999 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt.c7fa3a9e ...
	I0213 15:33:16.521871   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt.c7fa3a9e: {Name:mk3db7156f4075b55654e484bcc410a00a320cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.522422   13999 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key.c7fa3a9e ...
	I0213 15:33:16.522429   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key.c7fa3a9e: {Name:mk5b725c7506904e02c1b7eb9d01d8fe9310f537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.522647   13999 certs.go:337] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt
	I0213 15:33:16.522816   13999 certs.go:341] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key
	I0213 15:33:16.522983   13999 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.key
	I0213 15:33:16.522996   13999 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.crt with IP's: []
	I0213 15:33:16.880487   13999 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.crt ...
	I0213 15:33:16.880504   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.crt: {Name:mkdd6e4c2fcee0d555e8dc5a6fe1d138e2bbe8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:16.880810   13999 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.key ...
	I0213 15:33:16.880819   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.key: {Name:mkf295beb2c4d278a519d05f56a4378a5f3f76d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
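	minikube's crypto.go generates these CA-signed certificates in-process; an openssl CLI equivalent, as a sketch with illustrative file names and the SANs shown in the log:
	
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')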
	I0213 15:33:16.881219   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 15:33:16.881271   13999 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 15:33:16.881280   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:33:16.881319   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:33:16.881360   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:33:16.881395   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 15:33:16.882018   13999 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:33:16.882565   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:33:16.926118   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:33:16.968125   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:33:17.011025   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/skaffold-768000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 15:33:17.052608   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:33:17.094454   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 15:33:17.137554   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:33:17.180580   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 15:33:17.223825   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 15:33:17.267050   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:33:17.308814   13999 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 15:33:17.349859   13999 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:33:17.380388   13999 ssh_runner.go:195] Run: openssl version
	I0213 15:33:17.386113   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 15:33:17.402759   13999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 15:33:17.407185   13999 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 15:33:17.407241   13999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 15:33:17.414196   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 15:33:17.430390   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:33:17.446661   13999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:33:17.451045   13999 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:33:17.451110   13999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:33:17.458018   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:33:17.475558   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 15:33:17.492042   13999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 15:33:17.496475   13999 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 15:33:17.496520   13999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 15:33:17.503219   13999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
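	The openssl/ln pairs above implement OpenSSL's c_rehash convention: each CA certificate is symlinked into /etc/ssl/certs under its subject-hash name so the verifier can find it. For a single cert (path illustrative):
	
	cert=/usr/share/ca-certificates/extra.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. 3ec20f2e
	sudo ln -fs "$cert" "/etc/ssl/certs/$hash.0"    # OpenSSL looks up CAs as <hash>.N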
	I0213 15:33:17.520688   13999 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:33:17.525116   13999 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 15:33:17.525157   13999 kubeadm.go:404] StartCluster: {Name:skaffold-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:skaffold-768000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:33:17.525248   13999 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:33:17.543379   13999 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:33:17.559140   13999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:33:17.574885   13999 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:33:17.574982   13999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:33:17.590538   13999 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:33:17.590567   13999 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:33:17.643554   13999 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 15:33:17.643641   13999 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:33:17.768766   13999 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:33:17.768901   13999 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:33:17.769008   13999 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:33:18.060626   13999 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:33:18.117892   13999 out.go:204]   - Generating certificates and keys ...
	I0213 15:33:18.117959   13999 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:33:18.118014   13999 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:33:18.217459   13999 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 15:33:18.297750   13999 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 15:33:18.375501   13999 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 15:33:18.543043   13999 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 15:33:18.606912   13999 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 15:33:18.607029   13999 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-768000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:33:18.877727   13999 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 15:33:18.877831   13999 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-768000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:33:18.958112   13999 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 15:33:19.179773   13999 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 15:33:19.286218   13999 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 15:33:19.286283   13999 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:33:19.431693   13999 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:33:19.595412   13999 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:33:19.694877   13999 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:33:19.833147   13999 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:33:19.833486   13999 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:33:19.836783   13999 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:33:19.897986   13999 out.go:204]   - Booting up control plane ...
	I0213 15:33:19.898096   13999 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:33:19.898205   13999 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:33:19.898320   13999 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:33:19.898494   13999 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:33:19.898572   13999 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:33:19.898600   13999 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:33:19.924421   13999 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:33:24.926450   13999 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002375 seconds
	I0213 15:33:24.926616   13999 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 15:33:24.938496   13999 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 15:33:25.464590   13999 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 15:33:25.464739   13999 kubeadm.go:322] [mark-control-plane] Marking the node skaffold-768000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 15:33:25.971355   13999 kubeadm.go:322] [bootstrap-token] Using token: fcw15m.vltgblulmwbgupss
	I0213 15:33:26.016922   13999 out.go:204]   - Configuring RBAC rules ...
	I0213 15:33:26.017072   13999 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 15:33:26.017172   13999 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 15:33:26.059267   13999 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 15:33:26.063104   13999 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 15:33:26.066548   13999 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 15:33:26.069159   13999 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 15:33:26.077271   13999 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 15:33:26.228482   13999 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 15:33:26.400218   13999 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 15:33:26.401875   13999 kubeadm.go:322] 
	I0213 15:33:26.401948   13999 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 15:33:26.401954   13999 kubeadm.go:322] 
	I0213 15:33:26.402031   13999 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 15:33:26.402035   13999 kubeadm.go:322] 
	I0213 15:33:26.402054   13999 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 15:33:26.402100   13999 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 15:33:26.402139   13999 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 15:33:26.402142   13999 kubeadm.go:322] 
	I0213 15:33:26.402192   13999 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 15:33:26.402196   13999 kubeadm.go:322] 
	I0213 15:33:26.402243   13999 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 15:33:26.402250   13999 kubeadm.go:322] 
	I0213 15:33:26.402299   13999 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 15:33:26.402361   13999 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 15:33:26.402458   13999 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 15:33:26.402464   13999 kubeadm.go:322] 
	I0213 15:33:26.402555   13999 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 15:33:26.402644   13999 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 15:33:26.402650   13999 kubeadm.go:322] 
	I0213 15:33:26.402742   13999 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fcw15m.vltgblulmwbgupss \
	I0213 15:33:26.402890   13999 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ec544454347b5e5d48e23ee1b9aa2810f9f410e5602199cd4da9ee9f3806dac7 \
	I0213 15:33:26.402909   13999 kubeadm.go:322] 	--control-plane 
	I0213 15:33:26.402912   13999 kubeadm.go:322] 
	I0213 15:33:26.403010   13999 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 15:33:26.403019   13999 kubeadm.go:322] 
	I0213 15:33:26.403136   13999 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fcw15m.vltgblulmwbgupss \
	I0213 15:33:26.403262   13999 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ec544454347b5e5d48e23ee1b9aa2810f9f410e5602199cd4da9ee9f3806dac7 
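	(The --discovery-token-ca-cert-hash printed above is the sha256 of the cluster CA's public key. If it is ever needed again, it can be recomputed on the control plane with the standard kubeadm recipe — shown here as a reference sketch, not part of the captured run:)

	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'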
	I0213 15:33:26.411850   13999 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0213 15:33:26.412017   13999 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:33:26.412037   13999 cni.go:84] Creating CNI manager for ""
	I0213 15:33:26.412057   13999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:33:26.434462   13999 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:33:26.455886   13999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:33:26.517125   13999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
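	(The 457-byte 1-k8s.conflist copied above is a bridge CNI plugin chain. An illustrative sketch of its general shape follows — field values, including the subnet, are assumptions, not the verbatim payload:)

	  $ cat /etc/cni/net.d/1-k8s.conflist
	  { "cniVersion": "0.3.1", "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
	        "ipMasq": true, "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } } ] }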
	I0213 15:33:26.621788   13999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:33:26.621856   13999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:33:26.621861   13999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=skaffold-768000 minikube.k8s.io/updated_at=2024_02_13T15_33_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:33:26.632339   13999 ops.go:34] apiserver oom_adj: -16
	I0213 15:33:26.823675   13999 kubeadm.go:1088] duration metric: took 201.878956ms to wait for elevateKubeSystemPrivileges.
	I0213 15:33:26.823686   13999 kubeadm.go:406] StartCluster complete in 9.298717997s
	I0213 15:33:26.823700   13999 settings.go:142] acquiring lock: {Name:mk73e2877e5f833d3067188c2d2115030ace2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:26.823785   13999 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:33:26.824327   13999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:33:26.824972   13999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:33:26.825013   13999 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:33:26.825056   13999 addons.go:69] Setting storage-provisioner=true in profile "skaffold-768000"
	I0213 15:33:26.825089   13999 addons.go:234] Setting addon storage-provisioner=true in "skaffold-768000"
	I0213 15:33:26.825094   13999 addons.go:69] Setting default-storageclass=true in profile "skaffold-768000"
	I0213 15:33:26.825126   13999 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-768000"
	I0213 15:33:26.825137   13999 host.go:66] Checking if "skaffold-768000" exists ...
	I0213 15:33:26.825169   13999 config.go:182] Loaded profile config "skaffold-768000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:33:26.825402   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:26.825502   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:26.933912   13999 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:33:26.893216   13999 addons.go:234] Setting addon default-storageclass=true in "skaffold-768000"
	I0213 15:33:26.970817   13999 host.go:66] Checking if "skaffold-768000" exists ...
	I0213 15:33:26.970926   13999 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:33:26.970936   13999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:33:26.971030   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:26.972376   13999 cli_runner.go:164] Run: docker container inspect skaffold-768000 --format={{.State.Status}}
	I0213 15:33:26.980812   13999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
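	(This one-liner patches the coredns ConfigMap in place: the sed expressions insert a "log" directive before the "errors" line and a "hosts" block before the "forward . /etc/resolv.conf" stanza, so the rewritten Corefile contains a fragment like the following — derived from the command above, surrounding directives elided:)

	  log
	  errors
	  hosts {
	     192.168.65.254 host.minikube.internal
	     fallthrough
	  }
	  forward . /etc/resolv.conf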
	I0213 15:33:27.041070   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:27.041184   13999 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:33:27.041195   13999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:33:27.041318   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:27.102950   13999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54233 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/skaffold-768000/id_rsa Username:docker}
	I0213 15:33:27.235294   13999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:33:27.238682   13999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:33:27.334728   13999 kapi.go:248] "coredns" deployment in "kube-system" namespace and "skaffold-768000" context rescaled to 1 replicas
	I0213 15:33:27.334747   13999 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:33:27.359890   13999 out.go:177] * Verifying Kubernetes components...
	I0213 15:33:27.417748   13999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:33:28.019672   13999 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.038857925s)
	I0213 15:33:28.019701   13999 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0213 15:33:28.141359   13999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-768000
	I0213 15:33:28.172582   13999 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 15:33:28.196401   13999 addons.go:505] enable addons completed in 1.371423333s: enabled=[storage-provisioner default-storageclass]
	I0213 15:33:28.205598   13999 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:33:28.205649   13999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:33:28.223729   13999 api_server.go:72] duration metric: took 888.981634ms to wait for apiserver process to appear ...
	I0213 15:33:28.223737   13999 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:33:28.223752   13999 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54237/healthz ...
	I0213 15:33:28.230191   13999 api_server.go:279] https://127.0.0.1:54237/healthz returned 200:
	ok
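	(The same probe can be replayed by hand against the forwarded apiserver port; -k skips verification of the profile's self-signed CA. Illustrative, not part of the captured run:)

	  $ curl -k https://127.0.0.1:54237/healthz
	  ok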
	I0213 15:33:28.231575   13999 api_server.go:141] control plane version: v1.28.4
	I0213 15:33:28.231585   13999 api_server.go:131] duration metric: took 7.845103ms to wait for apiserver health ...
	I0213 15:33:28.231592   13999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 15:33:28.236522   13999 system_pods.go:59] 5 kube-system pods found
	I0213 15:33:28.236536   13999 system_pods.go:61] "etcd-skaffold-768000" [7ec3a1cc-f6e6-45e6-aaeb-45e2e106244c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 15:33:28.236540   13999 system_pods.go:61] "kube-apiserver-skaffold-768000" [fa2cdc61-f5b2-4246-83ff-0167efa6de02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 15:33:28.236546   13999 system_pods.go:61] "kube-controller-manager-skaffold-768000" [08e641b4-8184-4662-ab5b-950097c9058d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 15:33:28.236549   13999 system_pods.go:61] "kube-scheduler-skaffold-768000" [3a4feae2-f4fc-460b-815f-58ca71cdd0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 15:33:28.236555   13999 system_pods.go:61] "storage-provisioner" [5363e6b7-139f-4bfa-84ba-312b4a61966c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0213 15:33:28.236558   13999 system_pods.go:74] duration metric: took 4.963555ms to wait for pod list to return data ...
	I0213 15:33:28.236562   13999 kubeadm.go:581] duration metric: took 901.818264ms to wait for : map[apiserver:true system_pods:true] ...
	I0213 15:33:28.236568   13999 node_conditions.go:102] verifying NodePressure condition ...
	I0213 15:33:28.239090   13999 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 15:33:28.239099   13999 node_conditions.go:123] node cpu capacity is 12
	I0213 15:33:28.239109   13999 node_conditions.go:105] duration metric: took 2.539836ms to run NodePressure ...
	I0213 15:33:28.239115   13999 start.go:228] waiting for startup goroutines ...
	I0213 15:33:28.239118   13999 start.go:233] waiting for cluster config update ...
	I0213 15:33:28.239126   13999 start.go:242] writing updated cluster config ...
	I0213 15:33:28.239454   13999 ssh_runner.go:195] Run: rm -f paused
	I0213 15:33:28.285812   13999 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 15:33:28.307907   13999 out.go:177] * Done! kubectl is now configured to use "skaffold-768000" cluster and "default" namespace by default
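	(With the context configured, a quick sanity check would look like the following — illustrative, not part of the captured run:)

	  $ kubectl --context skaffold-768000 get nodes
	  $ kubectl --context skaffold-768000 -n kube-system get pods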
	
	
	==> Docker <==
	Feb 13 23:33:14 skaffold-768000 systemd[1]: Started Docker Application Container Engine.
	Feb 13 23:33:15 skaffold-768000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Start docker client with request timeout 0s"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Loaded network plugin cni"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Docker Info: &{ID:1fd96840-31bb-414c-9d1d-1bcb29c27d6b Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2024-02-13T23:33:15.412759317Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.6.12-linuxkit OperatingSystem:Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00014ecb0 NCPU:12 MemTotal:6213292032 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:skaffold-768000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 13 23:33:15 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:15Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 13 23:33:15 skaffold-768000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 13 23:33:21 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e6f82ed6968258d1c0b372b11dd43f2918041e930a6fdcb429c0d482fab5ef8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:21 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc79bddbb2b53880d888bf73433526485910636f77754edc13c50d209ea64376/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:21 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e201f3d65f3b36c3b207bbc18432ff82f67d80f9eee62afc2cad5b2e550ab19d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:21 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5558c172e266f0aa378bf28725ae7b9d44880a1cfea1a3e647b2b392e7fbecd2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:38 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/160d4bd47200583c823a151dcde827d8432a44571e30dab55eaa1c79a7f8875d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:39 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ce76300f10f5f04a5b8b92d211a6f4774ff2920194d2ddd57e81333a857e8d7d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:39 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a196a0bbfc9f9d8b2777f4911344c30ee2e53553cbae09799078da359dbe87a0/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:33:42 skaffold-768000 dockerd[1056]: time="2024-02-13T23:33:42.056587171Z" level=warning msg="no trace recorder found, skipping"
	Feb 13 23:33:47 skaffold-768000 cri-dockerd[1270]: time="2024-02-13T23:33:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Feb 13 23:34:08 skaffold-768000 dockerd[1056]: time="2024-02-13T23:34:08.848529809Z" level=info msg="ignoring event" container=30863c434babb6d44866d7794272eb26f92cf72333065fda38893197901969d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:37:46 skaffold-768000 dockerd[1056]: time="2024-02-13T23:37:46.303622114Z" level=warning msg="no trace recorder found, skipping"
	Feb 13 23:38:10 skaffold-768000 dockerd[1056]: time="2024-02-13T23:38:10.607167583Z" level=warning msg="no trace recorder found, skipping"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ece5ce246391d       6e38f40d628db       4 minutes ago       Running             storage-provisioner       1                   160d4bd472005       storage-provisioner
	3ae9935a090ca       ead0a4a53df89       4 minutes ago       Running             coredns                   0                   a196a0bbfc9f9       coredns-5dd5756b68-xzqgv
	454cf5a6fbb48       83f6cc407eed8       4 minutes ago       Running             kube-proxy                0                   ce76300f10f5f       kube-proxy-fgv4r
	30863c434babb       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   160d4bd472005       storage-provisioner
	c537bfb3c8260       73deb9a3f7025       4 minutes ago       Running             etcd                      0                   5558c172e266f       etcd-skaffold-768000
	72d8d0db9562c       d058aa5ab969c       4 minutes ago       Running             kube-controller-manager   0                   e201f3d65f3b3       kube-controller-manager-skaffold-768000
	b761c4cced347       7fe0e6f37db33       4 minutes ago       Running             kube-apiserver            0                   bc79bddbb2b53       kube-apiserver-skaffold-768000
	e816c22c1ec07       e3db313c6dbc0       4 minutes ago       Running             kube-scheduler            0                   7e6f82ed69682       kube-scheduler-skaffold-768000
	
	
	==> coredns [3ae9935a090c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52550 - 63598 "HINFO IN 4319041340285233393.9175498716311135636. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008193021s
	
	
	==> describe nodes <==
	Name:               skaffold-768000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-768000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=skaffold-768000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T15_33_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:33:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-768000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:38:02 +0000   Tue, 13 Feb 2024 23:33:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:38:02 +0000   Tue, 13 Feb 2024 23:33:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:38:02 +0000   Tue, 13 Feb 2024 23:33:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:38:02 +0000   Tue, 13 Feb 2024 23:33:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    skaffold-768000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067668Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067668Ki
	  pods:               110
	System Info:
	  Machine ID:                 08d306c3c4b84a278613471a27f0070b
	  System UUID:                08d306c3c4b84a278613471a27f0070b
	  Boot ID:                    eafff5ab-67ad-478e-9471-32de0553af9c
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-xzqgv                   100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     4m33s
	  kube-system                 etcd-skaffold-768000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         4m46s
	  kube-system                 kube-apiserver-skaffold-768000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-skaffold-768000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-fgv4r                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-skaffold-768000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (6%)   0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node skaffold-768000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node skaffold-768000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node skaffold-768000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet          Node skaffold-768000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet          Node skaffold-768000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet          Node skaffold-768000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m34s                  node-controller  Node skaffold-768000 event: Registered Node skaffold-768000 in Controller
	
	
	==> dmesg <==
	[  +0.000002] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.003135] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.002771] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.004828] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.002001] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000000] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.004812] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.005173] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.004853] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.002551] virtio-pci 0000:00:0f.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0f.0: PCI INT A: no GSI
	[  +0.002736] virtio-pci 0000:00:10.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:10.0: PCI INT A: no GSI
	[  +0.007628] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.023892] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.198313] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.025884] fakeowner: loading out-of-tree module taints kernel.
	[  +0.009868] netlink: 'init': attribute type 22 has an invalid length.
	[Feb13 22:54] systemd[1438]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [c537bfb3c826] <==
	{"level":"info","ts":"2024-02-13T23:33:21.496935Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8688e899f7831fc7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-02-13T23:33:21.49716Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T23:33:21.497919Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T23:33:21.497936Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T23:33:21.500563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-13T23:33:21.50072Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-13T23:33:22.120828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:33:22.120911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:33:22.120925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-02-13T23:33:22.120935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:33:22.12094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-13T23:33:22.120946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:33:22.120952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-13T23:33:22.121766Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:33:22.122368Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:skaffold-768000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:33:22.12238Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:33:22.12247Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:33:22.122469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:33:22.122687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:33:22.122791Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:33:22.123161Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:33:22.123204Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:33:22.123814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:33:22.12384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-13T23:37:14.911737Z","caller":"traceutil/trace.go:171","msg":"trace[1805885070] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"276.898364ms","start":"2024-02-13T23:37:14.634825Z","end":"2024-02-13T23:37:14.911723Z","steps":["trace[1805885070] 'process raft request'  (duration: 276.835352ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:38:12 up  1:01,  0 users,  load average: 3.67, 4.04, 3.92
	Linux skaffold-768000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [b761c4cced34] <==
	I0213 23:33:23.395815       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 23:33:23.395944       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0213 23:33:23.396011       1 aggregator.go:166] initial CRD sync complete...
	I0213 23:33:23.396019       1 autoregister_controller.go:141] Starting autoregister controller
	I0213 23:33:23.396026       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0213 23:33:23.396034       1 cache.go:39] Caches are synced for autoregister controller
	I0213 23:33:23.401469       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 23:33:23.401708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 23:33:23.403205       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0213 23:33:23.403315       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0213 23:33:24.205597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0213 23:33:24.208599       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0213 23:33:24.208610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0213 23:33:24.509304       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 23:33:24.539085       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0213 23:33:24.616353       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0213 23:33:24.621024       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0213 23:33:24.622097       1 controller.go:624] quota admission added evaluator for: endpoints
	I0213 23:33:24.626612       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 23:33:25.311630       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0213 23:33:26.215753       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0213 23:33:26.227210       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0213 23:33:26.234350       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0213 23:33:38.958516       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0213 23:33:39.061710       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [72d8d0db9562] <==
	I0213 23:33:38.339888       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0213 23:33:38.339925       1 event.go:307] "Event occurred" object="skaffold-768000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node skaffold-768000 event: Registered Node skaffold-768000 in Controller"
	I0213 23:33:38.339948       1 taint_manager.go:210] "Sending events to api server"
	I0213 23:33:38.344725       1 shared_informer.go:318] Caches are synced for persistent volume
	I0213 23:33:38.345733       1 shared_informer.go:318] Caches are synced for stateful set
	I0213 23:33:38.350545       1 shared_informer.go:318] Caches are synced for attach detach
	I0213 23:33:38.355692       1 shared_informer.go:318] Caches are synced for cronjob
	I0213 23:33:38.355789       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0213 23:33:38.361370       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0213 23:33:38.361420       1 shared_informer.go:318] Caches are synced for disruption
	I0213 23:33:38.411173       1 shared_informer.go:318] Caches are synced for resource quota
	I0213 23:33:38.725980       1 shared_informer.go:318] Caches are synced for garbage collector
	I0213 23:33:38.738081       1 shared_informer.go:318] Caches are synced for garbage collector
	I0213 23:33:38.738150       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0213 23:33:38.966289       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I0213 23:33:39.071517       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fgv4r"
	I0213 23:33:39.212240       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xzqgv"
	I0213 23:33:39.217222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="251.527292ms"
	I0213 23:33:39.223475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.196579ms"
	I0213 23:33:39.223571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.163µs"
	I0213 23:33:39.223596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.682µs"
	I0213 23:33:39.228914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.232µs"
	I0213 23:33:40.756888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.327µs"
	I0213 23:33:40.771441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.445063ms"
	I0213 23:33:40.771523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.687µs"
	
	
	==> kube-proxy [454cf5a6fbb4] <==
	I0213 23:33:39.640332       1 server_others.go:69] "Using iptables proxy"
	I0213 23:33:39.697369       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0213 23:33:39.721612       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0213 23:33:39.724082       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:33:39.724135       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0213 23:33:39.724143       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0213 23:33:39.724163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:33:39.724475       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:33:39.724510       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:33:39.725284       1 config.go:188] "Starting service config controller"
	I0213 23:33:39.725382       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:33:39.725425       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:33:39.725429       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:33:39.728332       1 config.go:315] "Starting node config controller"
	I0213 23:33:39.728401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:33:39.825762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:33:39.825799       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:33:39.830104       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e816c22c1ec0] <==
	W0213 23:33:23.402144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:33:23.403470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:33:23.402290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:33:23.403488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 23:33:23.402881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:33:23.403531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 23:33:23.404085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:33:23.403952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:33:23.404237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:33:23.404573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:33:23.404591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:33:23.404637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:33:24.215703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:33:24.215750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:33:24.272337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:33:24.272434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:33:24.303577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:33:24.303638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:33:24.359936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:33:24.359980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:33:24.360716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:33:24.360754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:33:24.523494       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:33:24.523540       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0213 23:33:26.215447       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 13 23:33:26 skaffold-768000 kubelet[2420]: I0213 23:33:26.798098    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a788ba90937e1bf56db769269fee3cdb-usr-share-ca-certificates\") pod \"kube-controller-manager-skaffold-768000\" (UID: \"a788ba90937e1bf56db769269fee3cdb\") " pod="kube-system/kube-controller-manager-skaffold-768000"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.324900    2420 apiserver.go:52] "Watching apiserver"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.339903    2420 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.528126    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-768000" podStartSLOduration=2.528092235 podCreationTimestamp="2024-02-13 23:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:27.528057392 +0000 UTC m=+1.337473464" watchObservedRunningTime="2024-02-13 23:33:27.528092235 +0000 UTC m=+1.337508307"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.609780    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-768000" podStartSLOduration=1.609645325 podCreationTimestamp="2024-02-13 23:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:27.599753267 +0000 UTC m=+1.409169343" watchObservedRunningTime="2024-02-13 23:33:27.609645325 +0000 UTC m=+1.419061395"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.609943    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-skaffold-768000" podStartSLOduration=1.609927977 podCreationTimestamp="2024-02-13 23:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:27.609630381 +0000 UTC m=+1.419046457" watchObservedRunningTime="2024-02-13 23:33:27.609927977 +0000 UTC m=+1.419344047"
	Feb 13 23:33:27 skaffold-768000 kubelet[2420]: I0213 23:33:27.696799    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-768000" podStartSLOduration=2.6967366029999997 podCreationTimestamp="2024-02-13 23:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:27.618963448 +0000 UTC m=+1.428379529" watchObservedRunningTime="2024-02-13 23:33:27.696736603 +0000 UTC m=+1.506152691"
	Feb 13 23:33:38 skaffold-768000 kubelet[2420]: I0213 23:33:38.350803    2420 topology_manager.go:215] "Topology Admit Handler" podUID="5363e6b7-139f-4bfa-84ba-312b4a61966c" podNamespace="kube-system" podName="storage-provisioner"
	Feb 13 23:33:38 skaffold-768000 kubelet[2420]: I0213 23:33:38.417139    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5363e6b7-139f-4bfa-84ba-312b4a61966c-tmp\") pod \"storage-provisioner\" (UID: \"5363e6b7-139f-4bfa-84ba-312b4a61966c\") " pod="kube-system/storage-provisioner"
	Feb 13 23:33:38 skaffold-768000 kubelet[2420]: I0213 23:33:38.417219    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkxw\" (UniqueName: \"kubernetes.io/projected/5363e6b7-139f-4bfa-84ba-312b4a61966c-kube-api-access-nrkxw\") pod \"storage-provisioner\" (UID: \"5363e6b7-139f-4bfa-84ba-312b4a61966c\") " pod="kube-system/storage-provisioner"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.075878    2420 topology_manager.go:215] "Topology Admit Handler" podUID="4891576d-84ec-4a44-9422-eff62d46bdaf" podNamespace="kube-system" podName="kube-proxy-fgv4r"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.126237    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4891576d-84ec-4a44-9422-eff62d46bdaf-xtables-lock\") pod \"kube-proxy-fgv4r\" (UID: \"4891576d-84ec-4a44-9422-eff62d46bdaf\") " pod="kube-system/kube-proxy-fgv4r"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.126319    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtfw\" (UniqueName: \"kubernetes.io/projected/4891576d-84ec-4a44-9422-eff62d46bdaf-kube-api-access-hwtfw\") pod \"kube-proxy-fgv4r\" (UID: \"4891576d-84ec-4a44-9422-eff62d46bdaf\") " pod="kube-system/kube-proxy-fgv4r"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.126336    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4891576d-84ec-4a44-9422-eff62d46bdaf-lib-modules\") pod \"kube-proxy-fgv4r\" (UID: \"4891576d-84ec-4a44-9422-eff62d46bdaf\") " pod="kube-system/kube-proxy-fgv4r"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.126352    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4891576d-84ec-4a44-9422-eff62d46bdaf-kube-proxy\") pod \"kube-proxy-fgv4r\" (UID: \"4891576d-84ec-4a44-9422-eff62d46bdaf\") " pod="kube-system/kube-proxy-fgv4r"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.216247    2420 topology_manager.go:215] "Topology Admit Handler" podUID="7297395c-7926-4df2-8166-e8ef1ad983d1" podNamespace="kube-system" podName="coredns-5dd5756b68-xzqgv"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.328932    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7297395c-7926-4df2-8166-e8ef1ad983d1-config-volume\") pod \"coredns-5dd5756b68-xzqgv\" (UID: \"7297395c-7926-4df2-8166-e8ef1ad983d1\") " pod="kube-system/coredns-5dd5756b68-xzqgv"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.328990    2420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2zws\" (UniqueName: \"kubernetes.io/projected/7297395c-7926-4df2-8166-e8ef1ad983d1-kube-api-access-l2zws\") pod \"coredns-5dd5756b68-xzqgv\" (UID: \"7297395c-7926-4df2-8166-e8ef1ad983d1\") " pod="kube-system/coredns-5dd5756b68-xzqgv"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.720261    2420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a196a0bbfc9f9d8b2777f4911344c30ee2e53553cbae09799078da359dbe87a0"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.739997    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgv4r" podStartSLOduration=0.739964036 podCreationTimestamp="2024-02-13 23:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:39.739928849 +0000 UTC m=+13.550030676" watchObservedRunningTime="2024-02-13 23:33:39.739964036 +0000 UTC m=+13.550065875"
	Feb 13 23:33:39 skaffold-768000 kubelet[2420]: I0213 23:33:39.801880    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.801850739 podCreationTimestamp="2024-02-13 23:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:39.801535763 +0000 UTC m=+13.611637587" watchObservedRunningTime="2024-02-13 23:33:39.801850739 +0000 UTC m=+13.611952555"
	Feb 13 23:33:40 skaffold-768000 kubelet[2420]: I0213 23:33:40.757083    2420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xzqgv" podStartSLOduration=1.7570519089999999 podCreationTimestamp="2024-02-13 23:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 23:33:40.75657131 +0000 UTC m=+14.566673132" watchObservedRunningTime="2024-02-13 23:33:40.757051909 +0000 UTC m=+14.567153731"
	Feb 13 23:33:47 skaffold-768000 kubelet[2420]: I0213 23:33:47.198671    2420 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 13 23:33:47 skaffold-768000 kubelet[2420]: I0213 23:33:47.199360    2420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 13 23:34:09 skaffold-768000 kubelet[2420]: I0213 23:34:09.925664    2420 scope.go:117] "RemoveContainer" containerID="30863c434babb6d44866d7794272eb26f92cf72333065fda38893197901969d7"
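For reference, the podStartSLOduration values in the kubelet lines above are simply observedRunningTime minus podCreationTimestamp. A quick stand-alone check of the storage-provisioner numbers (plain Go, not kubelet code):

    // slocheck.go - reproduces the kubelet SLO arithmetic from the log above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2024-02-13 23:33:28 +0000 UTC")
    	observed, _ := time.Parse(layout, "2024-02-13 23:33:39.801850739 +0000 UTC")
    	// Prints 11.801850739s, matching podStartSLOduration=11.801850739 above.
    	fmt.Println(observed.Sub(created))
    }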
	
	
	==> storage-provisioner [30863c434bab] <==
	I0213 23:33:38.836144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0213 23:34:08.838778       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
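The first provisioner container (30863c434bab, the same ID the kubelet removes at 23:34:09 above) dies because it cannot reach the in-cluster API service at 10.96.0.1:443 within the 32s window. A self-contained sketch of the kind of timeout-bounded /version probe that yields this exact error shape (stdlib Go only; the real provisioner uses client-go with the in-cluster CA and service-account token, so the InsecureSkipVerify here is purely to keep the sketch standalone):

    // probe_apiserver.go - illustrative sketch, not the provisioner's code.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 32 * time.Second, // matches the timeout=32s in the logged URL
    		Transport: &http.Transport{
    			// Assumption for self-containment; the real client verifies the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
    	if err != nil {
    		// A blocked service network surfaces exactly as "dial tcp ... i/o timeout".
    		fmt.Fprintf(os.Stderr, "error getting server version: %v\n", err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver reachable:", resp.Status)
    }

The replacement container (ece5ce246391, below) comes up a second later and immediately acquires the lease, so the service network was only transiently unreachable.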
	
	
	==> storage-provisioner [ece5ce246391] <==
	I0213 23:34:10.011639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:34:10.021257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:34:10.021308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:34:10.027885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:34:10.028044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_skaffold-768000_97142b61-02df-4d99-9070-50c7bbc4acc1!
	I0213 23:34:10.028153       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e28536e1-8981-4dd3-9531-7fa0c6966f40", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' skaffold-768000_97142b61-02df-4d99-9070-50c7bbc4acc1 became leader
	I0213 23:34:10.129643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_skaffold-768000_97142b61-02df-4d99-9070-50c7bbc4acc1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p skaffold-768000 -n skaffold-768000
helpers_test.go:261: (dbg) Run:  kubectl --context skaffold-768000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestSkaffold FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "skaffold-768000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-768000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-768000: (2.995588267s)
--- FAIL: TestSkaffold (319.24s)

TestKubernetesUpgrade (332.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m18.637327187s)
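The "(dbg) Run" / "(dbg) Non-zero exit" pair above is the harness's standard run-and-report pattern: execute the built binary, capture combined output, and surface any *exec.ExitError with its code and elapsed time. A minimal sketch of that shape (illustrative only, not the actual helpers behind version_upgrade_test.go):

    // runcmd.go - minimal stand-in for the harness's run-and-report step.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	cmd := exec.Command("out/minikube-darwin-amd64", "start",
    		"-p", "kubernetes-upgrade-108000", "--memory=2200",
    		"--kubernetes-version=v1.16.0", "--alsologtostderr", "-v=1", "--driver=docker")
    	out, err := cmd.CombinedOutput()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		fmt.Printf("Non-zero exit: exit status %d (%s)\n", ee.ExitCode(), time.Since(start))
    	}
    	_ = out // the harness echoes this as the -- stdout -- / ** stderr ** blocks
    }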

-- stdout --
	* [kubernetes-upgrade-108000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-108000 in cluster kubernetes-upgrade-108000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0213 15:43:42.642842   15948 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:43:42.643037   15948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:43:42.643042   15948 out.go:304] Setting ErrFile to fd 2...
	I0213 15:43:42.643046   15948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:43:42.643226   15948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:43:42.644841   15948 out.go:298] Setting JSON to false
	I0213 15:43:42.668285   15948 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4682,"bootTime":1707863140,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:43:42.668380   15948 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:43:42.690221   15948 out.go:177] * [kubernetes-upgrade-108000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:43:42.755047   15948 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:43:42.733931   15948 notify.go:220] Checking for updates...
	I0213 15:43:42.796875   15948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:43:42.839068   15948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:43:42.880873   15948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:43:42.922701   15948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:43:42.964886   15948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:43:42.985949   15948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:43:43.041013   15948 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:43:43.041169   15948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:43:43.146416   15948 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:43:43.135885641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:43:43.189450   15948 out.go:177] * Using the docker driver based on user configuration
	I0213 15:43:43.210704   15948 start.go:298] selected driver: docker
	I0213 15:43:43.210716   15948 start.go:902] validating driver "docker" against <nil>
	I0213 15:43:43.210724   15948 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:43:43.213790   15948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:43:43.321535   15948 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:43:43.312119099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:43:43.321704   15948 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:43:43.321888   15948 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 15:43:43.344637   15948 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 15:43:43.365653   15948 cni.go:84] Creating CNI manager for ""
	I0213 15:43:43.365669   15948 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:43:43.365679   15948 start_flags.go:321] config:
	{Name:kubernetes-upgrade-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:43:43.407343   15948 out.go:177] * Starting control plane node kubernetes-upgrade-108000 in cluster kubernetes-upgrade-108000
	I0213 15:43:43.428644   15948 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 15:43:43.449652   15948 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 15:43:43.491624   15948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:43:43.491649   15948 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 15:43:43.491669   15948 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 15:43:43.491681   15948 cache.go:56] Caching tarball of preloaded images
	I0213 15:43:43.491790   15948 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 15:43:43.491800   15948 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 15:43:43.492655   15948 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/config.json ...
	I0213 15:43:43.492742   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/config.json: {Name:mkaf7890acacc52e53e89882d923617a5788fd98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:43:43.542522   15948 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 15:43:43.542539   15948 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 15:43:43.542568   15948 cache.go:194] Successfully downloaded all kic artifacts
	I0213 15:43:43.542621   15948 start.go:365] acquiring machines lock for kubernetes-upgrade-108000: {Name:mk78f40f51a79f16ef4c36868bc4cb9ae2adaaae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:43:43.542768   15948 start.go:369] acquired machines lock for "kubernetes-upgrade-108000" in 134.871µs
	I0213 15:43:43.542795   15948 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-108000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:43:43.543045   15948 start.go:125] createHost starting for "" (driver="docker")
	I0213 15:43:43.585422   15948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0213 15:43:43.585612   15948 start.go:159] libmachine.API.Create for "kubernetes-upgrade-108000" (driver="docker")
	I0213 15:43:43.585637   15948 client.go:168] LocalClient.Create starting
	I0213 15:43:43.585748   15948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
	I0213 15:43:43.585803   15948 main.go:141] libmachine: Decoding PEM data...
	I0213 15:43:43.585821   15948 main.go:141] libmachine: Parsing certificate...
	I0213 15:43:43.585871   15948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
	I0213 15:43:43.585906   15948 main.go:141] libmachine: Decoding PEM data...
	I0213 15:43:43.585914   15948 main.go:141] libmachine: Parsing certificate...
	I0213 15:43:43.586365   15948 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-108000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 15:43:43.635937   15948 cli_runner.go:211] docker network inspect kubernetes-upgrade-108000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 15:43:43.636036   15948 network_create.go:281] running [docker network inspect kubernetes-upgrade-108000] to gather additional debugging logs...
	I0213 15:43:43.636063   15948 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-108000
	W0213 15:43:43.686403   15948 cli_runner.go:211] docker network inspect kubernetes-upgrade-108000 returned with exit code 1
	I0213 15:43:43.686434   15948 network_create.go:284] error running [docker network inspect kubernetes-upgrade-108000]: docker network inspect kubernetes-upgrade-108000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-108000 not found
	I0213 15:43:43.686446   15948 network_create.go:286] output of [docker network inspect kubernetes-upgrade-108000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-108000 not found
	
	** /stderr **
	I0213 15:43:43.686608   15948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 15:43:43.738802   15948 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:43:43.739196   15948 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022d2a60}
	I0213 15:43:43.739211   15948 network_create.go:124] attempt to create docker network kubernetes-upgrade-108000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 15:43:43.739278   15948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 kubernetes-upgrade-108000
	W0213 15:43:43.792362   15948 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 kubernetes-upgrade-108000 returned with exit code 1
	W0213 15:43:43.792410   15948 network_create.go:149] failed to create docker network kubernetes-upgrade-108000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 kubernetes-upgrade-108000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 15:43:43.792428   15948 network_create.go:116] failed to create docker network kubernetes-upgrade-108000 192.168.58.0/24, will retry: subnet is taken
	I0213 15:43:43.793801   15948 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:43:43.794190   15948 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002278550}
	I0213 15:43:43.794204   15948 network_create.go:124] attempt to create docker network kubernetes-upgrade-108000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 15:43:43.794270   15948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 kubernetes-upgrade-108000
	I0213 15:43:43.887397   15948 network_create.go:108] docker network kubernetes-upgrade-108000 192.168.67.0/24 created
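The three network attempts above show the subnet ladder at work: 192.168.49.0/24 is skipped as reserved, 192.168.58.0/24 fails with Docker's "Pool overlaps with other one on this address space", and 192.168.67.0/24 succeeds. A rough sketch of that try/skip/retry loop (illustrative only; minikube's network_create.go drives the Docker CLI the same way but with its own subnet bookkeeping):

    // pick_subnet.go - illustrative sketch of the subnet-retry loop above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Candidate private /24s in the order the log tries them.
    	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
    	const name = "kubernetes-upgrade-108000" // network name from the log
    	for _, subnet := range candidates {
    		gw := strings.TrimSuffix(subnet, "0/24") + "1" // x.y.z.1 gateway
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gw, name).CombinedOutput()
    		if err == nil {
    			fmt.Printf("created %s on %s\n", name, subnet)
    			return
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet taken on this host; try the next candidate
    		}
    		fmt.Printf("unexpected error: %v: %s\n", err, out)
    		return
    	}
    	fmt.Println("no free subnet found")
    }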
	I0213 15:43:43.887481   15948 kic.go:121] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-108000" container
	I0213 15:43:43.887612   15948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 15:43:43.944445   15948 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-108000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 --label created_by.minikube.sigs.k8s.io=true
	I0213 15:43:44.005245   15948 oci.go:103] Successfully created a docker volume kubernetes-upgrade-108000
	I0213 15:43:44.005386   15948 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-108000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 --entrypoint /usr/bin/test -v kubernetes-upgrade-108000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 15:43:44.512542   15948 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-108000
	I0213 15:43:44.512594   15948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:43:44.512608   15948 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 15:43:44.512715   15948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-108000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 15:43:46.977259   15948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-108000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.464497604s)
	I0213 15:43:46.977301   15948 kic.go:203] duration metric: took 2.464741 seconds to extract preloaded images to volume
	I0213 15:43:46.977467   15948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 15:43:47.089653   15948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-108000 --name kubernetes-upgrade-108000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-108000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-108000 --network kubernetes-upgrade-108000 --ip 192.168.67.2 --volume kubernetes-upgrade-108000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 15:43:47.445018   15948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Running}}
	I0213 15:43:47.506515   15948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:43:47.567929   15948 cli_runner.go:164] Run: docker exec kubernetes-upgrade-108000 stat /var/lib/dpkg/alternatives/iptables
	I0213 15:43:47.713997   15948 oci.go:144] the created container "kubernetes-upgrade-108000" has a running status.
	I0213 15:43:47.714062   15948 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa...
	I0213 15:43:47.799445   15948 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 15:43:47.875821   15948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:43:47.938476   15948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 15:43:47.938501   15948 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-108000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 15:43:48.046364   15948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:43:48.105464   15948 machine.go:88] provisioning docker machine ...
	I0213 15:43:48.105526   15948 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-108000"
	I0213 15:43:48.105661   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:48.167155   15948 main.go:141] libmachine: Using SSH client type: native
	I0213 15:43:48.167522   15948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0213 15:43:48.167535   15948 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-108000 && echo "kubernetes-upgrade-108000" | sudo tee /etc/hostname
	I0213 15:43:48.332187   15948 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-108000
	
	I0213 15:43:48.332283   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:48.387606   15948 main.go:141] libmachine: Using SSH client type: native
	I0213 15:43:48.387896   15948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0213 15:43:48.387911   15948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-108000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-108000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-108000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:43:48.527280   15948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:43:48.527298   15948 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 15:43:48.527319   15948 ubuntu.go:177] setting up certificates
	I0213 15:43:48.527329   15948 provision.go:83] configureAuth start
	I0213 15:43:48.527423   15948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108000
	I0213 15:43:48.583088   15948 provision.go:138] copyHostCerts
	I0213 15:43:48.583191   15948 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 15:43:48.583204   15948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 15:43:48.583334   15948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 15:43:48.583560   15948 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 15:43:48.583566   15948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 15:43:48.583652   15948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 15:43:48.583821   15948 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 15:43:48.583827   15948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 15:43:48.583916   15948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 15:43:48.584066   15948 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-108000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-108000]
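The server cert minted above carries SANs for the container IP, loopback, and the cluster hostnames. A compact crypto/x509 sketch of a certificate with the same SAN shape (self-signed here for brevity, which is the one liberty this sketch takes; libmachine actually signs with the minikube CA key from ca-key.pem):

    // servercert.go - sketch of a SAN-bearing server certificate like the one logged.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-108000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the logged san=[...] list: container IP, loopback, host names.
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-108000"},
    	}
    	// Self-signed (template doubles as parent); libmachine signs with the CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }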
	I0213 15:43:48.790829   15948 provision.go:172] copyRemoteCerts
	I0213 15:43:48.790906   15948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:43:48.790970   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:48.845606   15948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:43:48.950433   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:43:48.993526   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 15:43:49.037912   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 15:43:49.080756   15948 provision.go:86] duration metric: configureAuth took 553.421422ms
	I0213 15:43:49.080772   15948 ubuntu.go:193] setting minikube options for container-runtime
	I0213 15:43:49.080924   15948 config.go:182] Loaded profile config "kubernetes-upgrade-108000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 15:43:49.080994   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:49.161187   15948 main.go:141] libmachine: Using SSH client type: native
	I0213 15:43:49.161496   15948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0213 15:43:49.161511   15948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:43:49.302589   15948 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 15:43:49.302609   15948 ubuntu.go:71] root file system type: overlay
	I0213 15:43:49.302731   15948 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:43:49.302822   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:49.358092   15948 main.go:141] libmachine: Using SSH client type: native
	I0213 15:43:49.358402   15948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0213 15:43:49.358488   15948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:43:49.527715   15948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:43:49.527833   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:49.585229   15948 main.go:141] libmachine: Using SSH client type: native
	I0213 15:43:49.585540   15948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0213 15:43:49.585554   15948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:43:50.474988   15948 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-13 23:43:49.521953872 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
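Note the update idiom in the SSH command above: diff -u exits zero when the rendered unit already matches the installed one, which short-circuits the || { ... } block, so the mv, daemon-reload, and docker restart run only when the unit actually changed. Here the diff is non-empty, hence the full restart and the roughly 0.9s gap in the surrounding timestamps.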
	I0213 15:43:50.475016   15948 machine.go:91] provisioned docker machine in 2.369564233s
	I0213 15:43:50.475024   15948 client.go:171] LocalClient.Create took 6.889517137s
	I0213 15:43:50.475039   15948 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-108000" took 6.889564561s
	I0213 15:43:50.475046   15948 start.go:300] post-start starting for "kubernetes-upgrade-108000" (driver="docker")
	I0213 15:43:50.475054   15948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:43:50.475129   15948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:43:50.475183   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:50.529874   15948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:43:50.638404   15948 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:43:50.642830   15948 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 15:43:50.642854   15948 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 15:43:50.642862   15948 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 15:43:50.642869   15948 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 15:43:50.642879   15948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 15:43:50.642991   15948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 15:43:50.643185   15948 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 15:43:50.643402   15948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:43:50.658487   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:43:50.713225   15948 start.go:303] post-start completed in 238.170022ms
	I0213 15:43:50.713982   15948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108000
	I0213 15:43:50.774793   15948 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/config.json ...
	I0213 15:43:50.775284   15948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:43:50.775352   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:50.838210   15948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:43:50.938005   15948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 15:43:50.944364   15948 start.go:128] duration metric: createHost completed in 7.401436781s
	I0213 15:43:50.944396   15948 start.go:83] releasing machines lock for "kubernetes-upgrade-108000", held for 7.401762266s
	I0213 15:43:50.944521   15948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108000
	I0213 15:43:51.003532   15948 ssh_runner.go:195] Run: cat /version.json
	I0213 15:43:51.003547   15948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:43:51.003607   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:51.003628   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:51.066902   15948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:43:51.066916   15948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:43:51.333551   15948 ssh_runner.go:195] Run: systemctl --version
	I0213 15:43:51.340965   15948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 15:43:51.346790   15948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 15:43:51.394545   15948 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 15:43:51.394681   15948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:43:51.426322   15948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:43:51.457657   15948 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 15:43:51.457677   15948 start.go:475] detecting cgroup driver to use...
	I0213 15:43:51.457690   15948 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:43:51.457812   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:43:51.490137   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 15:43:51.508383   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:43:51.526654   15948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:43:51.526747   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:43:51.546280   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:43:51.564344   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:43:51.581613   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:43:51.598181   15948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:43:51.614630   15948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:43:51.631664   15948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:43:51.647077   15948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:43:51.662107   15948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:43:51.731740   15948 ssh_runner.go:195] Run: sudo systemctl restart containerd
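For anyone replaying this provisioning step by hand, the containerd reconfiguration logged just above reduces to a few sed edits plus a restart (same expressions as in the log; this sketch assumes /etc/containerd/config.toml already exists on the node):

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml   # "cgroupfs" driver
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd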
	I0213 15:43:51.826595   15948 start.go:475] detecting cgroup driver to use...
	I0213 15:43:51.826619   15948 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:43:51.826685   15948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:43:51.847711   15948 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 15:43:51.847794   15948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:43:51.868287   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:43:51.898768   15948 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:43:51.903526   15948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:43:51.920544   15948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:43:51.954461   15948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:43:52.037605   15948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:43:52.139063   15948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:43:52.139224   15948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:43:52.174314   15948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:43:52.237900   15948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:43:52.502991   15948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:43:52.526413   15948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:43:52.593557   15948 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 15:43:52.593649   15948 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-108000 dig +short host.docker.internal
	I0213 15:43:52.730557   15948 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 15:43:52.730704   15948 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 15:43:52.736151   15948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
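The /etc/hosts edit above uses a small idempotent pattern worth calling out: strip any stale host.minikube.internal entry, append the fresh mapping, and swap the file in via a temp copy so a re-run never duplicates the line (IP taken from this run):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts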
	I0213 15:43:52.754017   15948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:43:52.811060   15948 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:43:52.811141   15948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:43:52.833847   15948 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 15:43:52.833869   15948 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 15:43:52.833939   15948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:43:52.850328   15948 ssh_runner.go:195] Run: which lz4
	I0213 15:43:52.855453   15948 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 15:43:52.859443   15948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:43:52.859464   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 15:43:59.519954   15948 docker.go:649] Took 6.664677 seconds to copy over tarball
	I0213 15:43:59.520030   15948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 15:44:01.177086   15948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657068408s)
	I0213 15:44:01.177102   15948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 15:44:01.229570   15948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:44:01.245210   15948 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 15:44:01.273885   15948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:44:01.336317   15948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:44:01.809175   15948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:44:01.830097   15948 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
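Condensed, the preload round-trip logged above (existence check, copy-in, extract, restart, re-list) is, with commands as they appear in this run:

    stat -c "%s %y" /preloaded.tar.lz4     # absent on first start, so the host scp's the cached tarball in
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo systemctl restart docker
    docker images --format {{.Repository}}:{{.Tag}}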
	I0213 15:44:01.830112   15948 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 15:44:01.830118   15948 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:44:01.834714   15948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:44:01.835081   15948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:44:01.835271   15948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:44:01.835278   15948 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:44:01.835380   15948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:44:01.835411   15948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:44:01.835456   15948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 15:44:01.835486   15948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 15:44:01.841498   15948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 15:44:01.841798   15948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:44:01.841877   15948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:44:01.842764   15948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:44:01.843176   15948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:44:01.843213   15948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:44:01.843070   15948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:44:01.843430   15948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 15:44:03.762718   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 15:44:03.783211   15948 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 15:44:03.783248   15948 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:44:03.783311   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 15:44:03.802690   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 15:44:03.824862   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 15:44:03.844988   15948 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 15:44:03.845024   15948 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 15:44:03.845091   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 15:44:03.853253   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:44:03.869470   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 15:44:03.875768   15948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 15:44:03.875792   15948 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:44:03.875862   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:44:03.892454   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:44:03.894959   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:44:03.895933   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 15:44:03.898546   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 15:44:03.909057   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:44:03.917112   15948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 15:44:03.917131   15948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 15:44:03.917149   15948 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:44:03.917150   15948 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:44:03.917222   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:44:03.917226   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:44:03.920972   15948 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 15:44:03.920998   15948 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 15:44:03.921059   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 15:44:03.935216   15948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 15:44:03.935245   15948 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:44:03.935323   15948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:44:03.941627   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 15:44:03.941657   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 15:44:03.947862   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 15:44:04.004552   15948 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 15:44:04.570121   15948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:44:04.593087   15948 cache_images.go:92] LoadImages completed in 2.763009254s
	W0213 15:44:04.593150   15948 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0213 15:44:04.593223   15948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:44:04.644452   15948 cni.go:84] Creating CNI manager for ""
	I0213 15:44:04.644470   15948 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:44:04.644486   15948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:44:04.644502   15948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-108000 NodeName:kubernetes-upgrade-108000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 15:44:04.644596   15948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-108000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-108000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
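This rendered config is the file that later lands at /var/tmp/minikube/kubeadm.yaml; the by-hand equivalent of the steps logged further down is roughly:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification   # the docker driver skips this check, per the log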
	
	I0213 15:44:04.644648   15948 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-108000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 15:44:04.644713   15948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 15:44:04.659866   15948 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:44:04.659930   15948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:44:04.674755   15948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0213 15:44:04.703124   15948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:44:04.732159   15948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0213 15:44:04.760343   15948 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 15:44:04.764647   15948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:44:04.782225   15948 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000 for IP: 192.168.67.2
	I0213 15:44:04.782250   15948 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:04.782420   15948 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 15:44:04.782487   15948 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 15:44:04.782542   15948 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.key
	I0213 15:44:04.782563   15948 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.crt with IP's: []
	I0213 15:44:04.925825   15948 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.crt ...
	I0213 15:44:04.925840   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.crt: {Name:mk209f7e933cda45fe2c10eb1c0221700d7acd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:04.926173   15948 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.key ...
	I0213 15:44:04.926182   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.key: {Name:mk27407229e104c368e35f76ebd84228e6cb9add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:04.926387   15948 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key.c7fa3a9e
	I0213 15:44:04.926401   15948 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 15:44:04.983102   15948 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt.c7fa3a9e ...
	I0213 15:44:04.983112   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt.c7fa3a9e: {Name:mke8c3834a8fa48918caa28cbb5d2544effa1139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:04.983467   15948 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key.c7fa3a9e ...
	I0213 15:44:04.983476   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key.c7fa3a9e: {Name:mk2bc316eb3e5bbaeb130bd5065fe1531e4b0348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:04.983850   15948 certs.go:337] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt
	I0213 15:44:04.984150   15948 certs.go:341] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key
	I0213 15:44:04.984355   15948 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.key
	I0213 15:44:04.984371   15948 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.crt with IP's: []
	I0213 15:44:05.052585   15948 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.crt ...
	I0213 15:44:05.052600   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.crt: {Name:mk67e88d2ceef084f93de994d5915b8b51af61e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:05.052910   15948 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.key ...
	I0213 15:44:05.052920   15948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.key: {Name:mk765ee396ac74998e98e9e6668da82a0416303a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:44:05.053341   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 15:44:05.053418   15948 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 15:44:05.053432   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:44:05.053466   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:44:05.053496   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:44:05.053531   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 15:44:05.053596   15948 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:44:05.054129   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:44:05.094932   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 15:44:05.134076   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:44:05.173777   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 15:44:05.213952   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:44:05.253722   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 15:44:05.293967   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:44:05.334256   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 15:44:05.374093   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 15:44:05.413835   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:44:05.453765   15948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 15:44:05.493808   15948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:44:05.522011   15948 ssh_runner.go:195] Run: openssl version
	I0213 15:44:05.527426   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 15:44:05.542872   15948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 15:44:05.547061   15948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 15:44:05.547107   15948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 15:44:05.553537   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 15:44:05.569165   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:44:05.584834   15948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:44:05.589099   15948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:44:05.589146   15948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:44:05.595755   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:44:05.614008   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 15:44:05.629966   15948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 15:44:05.634242   15948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 15:44:05.634294   15948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 15:44:05.641675   15948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
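The link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL subject-hash convention: the file name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL locates CA certs in /etc/ssl/certs. A sketch of deriving one such link, using a cert path from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem)
    sudo ln -fs /usr/share/ca-certificates/67762.pem "/etc/ssl/certs/${h}.0"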
	I0213 15:44:05.657769   15948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:44:05.662002   15948 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 15:44:05.662047   15948 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:44:05.662143   15948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:44:05.682178   15948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:44:05.697853   15948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:44:05.712590   15948 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:44:05.712642   15948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:44:05.727038   15948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:44:05.727073   15948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:44:05.783282   15948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 15:44:05.783327   15948 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:44:06.055159   15948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:44:06.055242   15948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:44:06.055331   15948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:44:06.235411   15948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:44:06.303740   15948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:44:06.311211   15948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 15:44:06.377888   15948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:44:06.420096   15948 out.go:204]   - Generating certificates and keys ...
	I0213 15:44:06.420167   15948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:44:06.420242   15948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:44:06.454497   15948 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 15:44:06.631810   15948 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 15:44:06.702368   15948 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 15:44:06.908110   15948 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 15:44:07.098939   15948 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 15:44:07.099065   15948 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:44:07.363201   15948 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 15:44:07.363347   15948 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:44:07.508716   15948 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 15:44:07.589508   15948 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 15:44:07.720877   15948 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 15:44:07.721144   15948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:44:07.898956   15948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:44:08.247104   15948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:44:08.364776   15948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:44:08.533504   15948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:44:08.534407   15948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:44:08.558578   15948 out.go:204]   - Booting up control plane ...
	I0213 15:44:08.558735   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:44:08.558847   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:44:08.558933   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:44:08.559085   15948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:44:08.559267   15948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:44:48.543283   15948 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:44:48.543734   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:44:48.543886   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:44:53.544633   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:44:53.544805   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:45:03.545860   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:45:03.546130   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:45:23.548758   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:45:23.548971   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:46:03.550850   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:46:03.551265   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:46:03.551291   15948 kubeadm.go:322] 
	I0213 15:46:03.551381   15948 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 15:46:03.551472   15948 kubeadm.go:322] 	timed out waiting for the condition
	I0213 15:46:03.551483   15948 kubeadm.go:322] 
	I0213 15:46:03.551520   15948 kubeadm.go:322] This error is likely caused by:
	I0213 15:46:03.551576   15948 kubeadm.go:322] 	- The kubelet is not running
	I0213 15:46:03.551759   15948 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 15:46:03.551773   15948 kubeadm.go:322] 
	I0213 15:46:03.551894   15948 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 15:46:03.551940   15948 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 15:46:03.551974   15948 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 15:46:03.551981   15948 kubeadm.go:322] 
	I0213 15:46:03.552089   15948 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 15:46:03.552193   15948 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 15:46:03.552290   15948 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 15:46:03.552343   15948 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 15:46:03.552474   15948 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 15:46:03.552524   15948 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 15:46:03.556674   15948 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 15:46:03.556736   15948 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 15:46:03.556893   15948 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 15:46:03.556996   15948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:46:03.557084   15948 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 15:46:03.557185   15948 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
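Collected into a runnable checklist, the troubleshooting steps kubeadm suggests above (run inside the node, e.g. via 'minikube ssh -p kubernetes-upgrade-108000'):

    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause   # list Kubernetes containers
    docker logs CONTAINERID                    # inspect the failing container found above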
	W0213 15:46:03.557280   15948 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 15:46:03.557318   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 15:46:03.976540   15948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:46:03.993658   15948 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:46:03.993717   15948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:46:04.008843   15948 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:46:04.008870   15948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:46:04.060118   15948 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 15:46:04.060163   15948 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:46:04.306973   15948 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:46:04.307131   15948 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:46:04.307282   15948 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:46:04.478571   15948 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:46:04.479102   15948 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:46:04.485489   15948 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 15:46:04.557974   15948 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:46:04.581508   15948 out.go:204]   - Generating certificates and keys ...
	I0213 15:46:04.581609   15948 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:46:04.581658   15948 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:46:04.581807   15948 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:46:04.581902   15948 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:46:04.582003   15948 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:46:04.582098   15948 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:46:04.582184   15948 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:46:04.582237   15948 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:46:04.582305   15948 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:46:04.582405   15948 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:46:04.582437   15948 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:46:04.582491   15948 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:46:04.811152   15948 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:46:05.155756   15948 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:46:05.314939   15948 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:46:05.546833   15948 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:46:05.547433   15948 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:46:05.569159   15948 out.go:204]   - Booting up control plane ...
	I0213 15:46:05.569256   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:46:05.569329   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:46:05.569382   15948 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:46:05.569451   15948 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:46:05.569582   15948 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:46:45.555981   15948 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:46:45.556533   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:46:45.556684   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:46:50.559533   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:46:50.559769   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:47:00.560571   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:47:00.560801   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:47:20.561023   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:47:20.561189   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:48:00.562240   15948 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:48:00.562546   15948 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:48:00.562559   15948 kubeadm.go:322] 
	I0213 15:48:00.562622   15948 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 15:48:00.562673   15948 kubeadm.go:322] 	timed out waiting for the condition
	I0213 15:48:00.562687   15948 kubeadm.go:322] 
	I0213 15:48:00.562796   15948 kubeadm.go:322] This error is likely caused by:
	I0213 15:48:00.562838   15948 kubeadm.go:322] 	- The kubelet is not running
	I0213 15:48:00.562955   15948 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 15:48:00.562965   15948 kubeadm.go:322] 
	I0213 15:48:00.563081   15948 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 15:48:00.563118   15948 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 15:48:00.563162   15948 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 15:48:00.563171   15948 kubeadm.go:322] 
	I0213 15:48:00.563283   15948 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 15:48:00.563435   15948 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 15:48:00.563579   15948 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 15:48:00.563646   15948 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 15:48:00.563741   15948 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 15:48:00.563778   15948 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 15:48:00.571726   15948 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 15:48:00.571888   15948 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 15:48:00.572059   15948 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 15:48:00.572178   15948 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:48:00.572288   15948 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 15:48:00.572373   15948 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 15:48:00.572424   15948 kubeadm.go:406] StartCluster complete in 3m54.914987779s
	I0213 15:48:00.572550   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:48:00.595224   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.595240   15948 logs.go:278] No container was found matching "kube-apiserver"
	I0213 15:48:00.595310   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:48:00.614541   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.614555   15948 logs.go:278] No container was found matching "etcd"
	I0213 15:48:00.614631   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:48:00.639922   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.639945   15948 logs.go:278] No container was found matching "coredns"
	I0213 15:48:00.640065   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:48:00.665054   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.665073   15948 logs.go:278] No container was found matching "kube-scheduler"
	I0213 15:48:00.665164   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:48:00.683035   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.683064   15948 logs.go:278] No container was found matching "kube-proxy"
	I0213 15:48:00.683140   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:48:00.701618   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.701632   15948 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 15:48:00.701710   15948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:48:00.719882   15948 logs.go:276] 0 containers: []
	W0213 15:48:00.719896   15948 logs.go:278] No container was found matching "kindnet"
	I0213 15:48:00.719905   15948 logs.go:123] Gathering logs for kubelet ...
	I0213 15:48:00.719915   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:48:00.776087   15948 logs.go:123] Gathering logs for dmesg ...
	I0213 15:48:00.776104   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:48:00.796381   15948 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:48:00.796397   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 15:48:00.919763   15948 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 15:48:00.919780   15948 logs.go:123] Gathering logs for Docker ...
	I0213 15:48:00.919790   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:48:00.942507   15948 logs.go:123] Gathering logs for container status ...
	I0213 15:48:00.942520   15948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 15:48:01.005047   15948 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 15:48:01.005069   15948 out.go:239] * 
	W0213 15:48:01.005107   15948 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 15:48:01.005122   15948 out.go:239] * 
	W0213 15:48:01.005834   15948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:48:01.071497   15948 out.go:177] 
	W0213 15:48:01.114573   15948 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 15:48:01.114616   15948 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 15:48:01.114632   15948 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 15:48:01.157317   15948 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
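Note: each failed init attempt above ends with the same two signals: the IsDockerSystemdCheck warning (Docker is using the "cgroupfs" cgroup driver) and a kubelet that never answers on localhost:10248. That combination matches the cgroup-driver mismatch that minikube's own Suggestion line calls out. An illustrative way to confirm and apply that suggestion on a comparable node (these commands are not part of this test run; the second is quoted from the Suggestion above):
	- docker info --format '{{.CgroupDriver}}'
	- minikube start -p kubernetes-upgrade-108000 --extra-config=kubelet.cgroup-driver=systemd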
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-108000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-108000: (1.60772841s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-108000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-108000 status --format={{.Host}}: exit status 7 (119.332421ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (30.565458916s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-108000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (438.916544ms)

-- stdout --
	* [kubernetes-upgrade-108000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-108000
	    minikube start -p kubernetes-upgrade-108000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1080002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-108000 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
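Note: the downgrade is rejected before any cluster mutation: the requested version (v1.16.0) is compared against the version the profile is already running (v1.29.0-rc.2), and minikube exits with K8S_DOWNGRADE_UNSUPPORTED in well under a second (438ms above). A minimal sketch of such a guard in Go using golang.org/x/mod/semver, illustrative only and not minikube's actual implementation:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing := "v1.29.0-rc.2" // version the profile is already running
		requested := "v1.16.0"     // value passed via --kubernetes-version
		// semver.Compare returns -1 when its first argument is the lower version.
		if semver.Compare(requested, existing) < 0 {
			fmt.Printf("K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s to %s\n", existing, requested)
		}
	}

A real guard would, as the Suggestion block above shows, also offer the delete-and-recreate, second-profile, or keep-current-version recovery paths.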
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-108000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker : (34.092388798s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-13 15:49:08.139951 -0800 PST m=+3398.274676757
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-108000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-108000:

-- stdout --
	[
	    {
	        "Id": "164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1",
	        "Created": "2024-02-13T23:43:47.14647948Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:48:04.956636831Z",
	            "FinishedAt": "2024-02-13T23:48:01.700511602Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1/hostname",
	        "HostsPath": "/var/lib/docker/containers/164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1/hosts",
	        "LogPath": "/var/lib/docker/containers/164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1/164616ae035dd3ade7c460d540675b5e95fd812708efa256e2b97b9fe4b37fa1-json.log",
	        "Name": "/kubernetes-upgrade-108000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-108000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-108000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48d0999e1094a45af2b54ae626981b0bab6c3f6bcf4773c0addb32002ac948d2-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48d0999e1094a45af2b54ae626981b0bab6c3f6bcf4773c0addb32002ac948d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48d0999e1094a45af2b54ae626981b0bab6c3f6bcf4773c0addb32002ac948d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48d0999e1094a45af2b54ae626981b0bab6c3f6bcf4773c0addb32002ac948d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-108000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-108000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-108000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-108000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-108000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d034f659e30696e6cac72811c80bfc9d6eb6761ea39c84237857c26901a54e5",
	            "SandboxKey": "/var/run/docker/netns/5d034f659e30",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55039"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55040"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55041"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55042"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55038"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-108000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "164616ae035d",
	                        "kubernetes-upgrade-108000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "eecfabfff7e58a950e2e5a586acea3335e78d46c99a3323550d95d5647997b39",
	                    "EndpointID": "75fecf34a3a6b2a9d13c14505eb1309efc6a9c4e4b31979b9374301e85cf3230",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-108000",
	                        "164616ae035d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
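Most of the inspect dump above is noise for triage; the useful fields (container state, restart count, host port mappings) can be pulled directly with docker inspect's Go-template flag. A sketch against the same container, assuming it is still present:

    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-108000
    docker inspect -f '{{json .NetworkSettings.Ports}}' kubernetes-upgrade-108000

In this run the container is "running" with RestartCount 0, and 8443/tcp is published on 127.0.0.1:55038, the same endpoint the apiserver healthz probes later in this log poll.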
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-108000 -n kubernetes-upgrade-108000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-108000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-108000 logs -n 25: (2.817383335s)
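The same post-mortem can be reproduced by hand against a live profile using the two commands the harness ran above:

    out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-108000 -n kubernetes-upgrade-108000
    out/minikube-darwin-amd64 -p kubernetes-upgrade-108000 logs -n 25

Both invocations are verbatim from the helpers; for logs, -n 25 appears to limit the output to the last 25 entries per log source.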
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | stopped-upgrade-680000 stop       | minikube                  | jenkins | v1.26.0 | 13 Feb 24 15:45 PST | 13 Feb 24 15:46 PST |
	| start   | -p stopped-upgrade-680000         | stopped-upgrade-680000    | jenkins | v1.32.0 | 13 Feb 24 15:46 PST | 13 Feb 24 15:46 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-680000         | stopped-upgrade-680000    | jenkins | v1.32.0 | 13 Feb 24 15:46 PST | 13 Feb 24 15:46 PST |
	| start   | -p pause-219000 --memory=2048     | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:46 PST | 13 Feb 24 15:47 PST |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	| start   | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:47 PST | 13 Feb 24 15:47 PST |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| pause   | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:47 PST | 13 Feb 24 15:47 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| unpause | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:47 PST | 13 Feb 24 15:47 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| pause   | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:47 PST | 13 Feb 24 15:48 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| delete  | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --alsologtostderr -v=5            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-108000      | kubernetes-upgrade-108000 | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	| start   | -p kubernetes-upgrade-108000      | kubernetes-upgrade-108000 | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p pause-219000                   | pause-219000              | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	| start   | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-108000      | kubernetes-upgrade-108000 | jenkins | v1.32.0 | 13 Feb 24 15:48 PST |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-108000      | kubernetes-upgrade-108000 | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:49 PST |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	| start   | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-739000 sudo       | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	| start   | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:48 PST |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-739000 sudo       | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-739000            | NoKubernetes-739000       | jenkins | v1.32.0 | 13 Feb 24 15:48 PST | 13 Feb 24 15:49 PST |
	| start   | -p auto-208000 --memory=3072      | auto-208000               | jenkins | v1.32.0 | 13 Feb 24 15:49 PST |                     |
	|         | --alsologtostderr --wait=true     |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 15:49:01
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 15:49:01.468166   17751 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:49:01.468368   17751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:49:01.468373   17751 out.go:304] Setting ErrFile to fd 2...
	I0213 15:49:01.468378   17751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:49:01.468547   17751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:49:01.470101   17751 out.go:298] Setting JSON to false
	I0213 15:49:01.495508   17751 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5001,"bootTime":1707863140,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:49:01.495684   17751 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:49:01.517314   17751 out.go:177] * [auto-208000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:49:01.584253   17751 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:49:01.559543   17751 notify.go:220] Checking for updates...
	I0213 15:49:01.626975   17751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:49:01.670154   17751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:49:01.712199   17751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:49:01.756005   17751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:49:01.798359   17751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:49:01.820571   17751 config.go:182] Loaded profile config "kubernetes-upgrade-108000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:49:01.820677   17751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:49:01.880268   17751 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:49:01.880433   17751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:49:01.983405   17751 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-13 23:49:01.971727525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:49:02.006319   17751 out.go:177] * Using the docker driver based on user configuration
	I0213 15:49:02.080097   17751 start.go:298] selected driver: docker
	I0213 15:49:02.080120   17751 start.go:902] validating driver "docker" against <nil>
	I0213 15:49:02.080138   17751 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:49:02.083967   17751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:49:02.212194   17751 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-13 23:49:02.198369282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:49:02.212416   17751 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:49:02.212650   17751 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:49:02.237203   17751 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 15:49:02.258291   17751 cni.go:84] Creating CNI manager for ""
	I0213 15:49:02.258323   17751 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:49:02.258334   17751 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:49:02.258352   17751 start_flags.go:321] config:
	{Name:auto-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:49:02.282066   17751 out.go:177] * Starting control plane node auto-208000 in cluster auto-208000
	I0213 15:49:02.319513   17751 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 15:49:02.341257   17751 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 15:49:02.399234   17751 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:49:02.399266   17751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 15:49:02.399287   17751 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 15:49:02.399298   17751 cache.go:56] Caching tarball of preloaded images
	I0213 15:49:02.399436   17751 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 15:49:02.399454   17751 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:49:02.399980   17751 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/config.json ...
	I0213 15:49:02.400168   17751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/config.json: {Name:mkc92d28309f93c10c1292ce67be5c041b06675e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:49:02.462149   17751 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 15:49:02.462272   17751 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 15:49:02.462296   17751 cache.go:194] Successfully downloaded all kic artifacts
	I0213 15:49:02.462341   17751 start.go:365] acquiring machines lock for auto-208000: {Name:mkbbbc13392e6016d43b295302c6ad9869c832ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:49:02.462494   17751 start.go:369] acquired machines lock for "auto-208000" in 140.171µs
	I0213 15:49:02.462521   17751 start.go:93] Provisioning new machine with config: &{Name:auto-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:49:02.462747   17751 start.go:125] createHost starting for "" (driver="docker")
	I0213 15:48:59.147369   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:48:59.152297   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 200:
	ok
	I0213 15:48:59.164922   17379 system_pods.go:86] 5 kube-system pods found
	I0213 15:48:59.164942   17379 system_pods.go:89] "etcd-kubernetes-upgrade-108000" [6f455d5e-fda9-47dd-a106-c6fa7780e56a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 15:48:59.164949   17379 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-108000" [ca1e9eb0-cd22-414b-984a-1e04559af854] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 15:48:59.164959   17379 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-108000" [4f293c6e-636d-4c51-95f4-d028afddedaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 15:48:59.164966   17379 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-108000" [148d707a-0f7c-4af8-b139-08f43348adb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 15:48:59.164972   17379 system_pods.go:89] "storage-provisioner" [ef7eb8c0-0607-40cd-bf6d-7aadca2d4f05] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 15:48:59.164978   17379 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0213 15:48:59.164987   17379 kubeadm.go:1135] stopping kube-system containers ...
	I0213 15:48:59.165048   17379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:48:59.185376   17379 docker.go:483] Stopping containers: [62a004cf27b1 5d0bc442024d 7c0ab1301d1f f1d764758598 420670df85fa ccfdf2f81a8f ef7e0400d4a7 b254c3cf2375 20ddadfc01a3 c15434b5d9b2 8af7b3a65ebf b2e70870a9cb f353c2061530 876c8944f5c4 5fef37f3764c 929001467fc8]
	I0213 15:48:59.185463   17379 ssh_runner.go:195] Run: docker stop 62a004cf27b1 5d0bc442024d 7c0ab1301d1f f1d764758598 420670df85fa ccfdf2f81a8f ef7e0400d4a7 b254c3cf2375 20ddadfc01a3 c15434b5d9b2 8af7b3a65ebf b2e70870a9cb f353c2061530 876c8944f5c4 5fef37f3764c 929001467fc8
	I0213 15:48:59.994124   17379 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 15:49:00.034053   17379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:49:00.097253   17379 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Feb 13 23:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Feb 13 23:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Feb 13 23:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Feb 13 23:46 /etc/kubernetes/scheduler.conf
	
	I0213 15:49:00.097413   17379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 15:49:00.120523   17379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 15:49:00.199148   17379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 15:49:00.221444   17379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 15:49:00.236360   17379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:49:00.252356   17379 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 15:49:00.252373   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:00.302796   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:01.198861   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:01.350291   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:01.412373   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:01.511338   17379 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:49:01.511469   17379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:49:02.012338   17379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:49:02.511866   17379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:49:02.534864   17379 api_server.go:72] duration metric: took 1.023546245s to wait for apiserver process to appear ...
	I0213 15:49:02.534885   17379 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:49:02.534901   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:02.484768   17751 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0213 15:49:02.485142   17751 start.go:159] libmachine.API.Create for "auto-208000" (driver="docker")
	I0213 15:49:02.485193   17751 client.go:168] LocalClient.Create starting
	I0213 15:49:02.485386   17751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
	I0213 15:49:02.485477   17751 main.go:141] libmachine: Decoding PEM data...
	I0213 15:49:02.485514   17751 main.go:141] libmachine: Parsing certificate...
	I0213 15:49:02.485625   17751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
	I0213 15:49:02.485701   17751 main.go:141] libmachine: Decoding PEM data...
	I0213 15:49:02.485719   17751 main.go:141] libmachine: Parsing certificate...
	I0213 15:49:02.486531   17751 cli_runner.go:164] Run: docker network inspect auto-208000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 15:49:02.549971   17751 cli_runner.go:211] docker network inspect auto-208000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 15:49:02.550065   17751 network_create.go:281] running [docker network inspect auto-208000] to gather additional debugging logs...
	I0213 15:49:02.550078   17751 cli_runner.go:164] Run: docker network inspect auto-208000
	W0213 15:49:02.603096   17751 cli_runner.go:211] docker network inspect auto-208000 returned with exit code 1
	I0213 15:49:02.603129   17751 network_create.go:284] error running [docker network inspect auto-208000]: docker network inspect auto-208000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-208000 not found
	I0213 15:49:02.603138   17751 network_create.go:286] output of [docker network inspect auto-208000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-208000 not found
	
	** /stderr **
	I0213 15:49:02.603306   17751 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 15:49:02.662881   17751 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:49:02.663286   17751 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002260e70}
	I0213 15:49:02.663303   17751 network_create.go:124] attempt to create docker network auto-208000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 15:49:02.663378   17751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000
	W0213 15:49:02.724642   17751 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000 returned with exit code 1
	W0213 15:49:02.724683   17751 network_create.go:149] failed to create docker network auto-208000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 15:49:02.724714   17751 network_create.go:116] failed to create docker network auto-208000 192.168.58.0/24, will retry: subnet is taken
	I0213 15:49:02.726189   17751 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:49:02.726636   17751 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002184fd0}
	I0213 15:49:02.726653   17751 network_create.go:124] attempt to create docker network auto-208000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 15:49:02.726759   17751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000
	W0213 15:49:02.780149   17751 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000 returned with exit code 1
	W0213 15:49:02.780185   17751 network_create.go:149] failed to create docker network auto-208000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 15:49:02.780201   17751 network_create.go:116] failed to create docker network auto-208000 192.168.67.0/24, will retry: subnet is taken
	I0213 15:49:02.781614   17751 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:49:02.782007   17751 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023535e0}
	I0213 15:49:02.782021   17751 network_create.go:124] attempt to create docker network auto-208000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0213 15:49:02.782093   17751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-208000 auto-208000
	I0213 15:49:02.881961   17751 network_create.go:108] docker network auto-208000 192.168.76.0/24 created
	I0213 15:49:02.882010   17751 kic.go:121] calculated static IP "192.168.76.2" for the "auto-208000" container
	I0213 15:49:02.882127   17751 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 15:49:02.939895   17751 cli_runner.go:164] Run: docker volume create auto-208000 --label name.minikube.sigs.k8s.io=auto-208000 --label created_by.minikube.sigs.k8s.io=true
	I0213 15:49:02.995254   17751 oci.go:103] Successfully created a docker volume auto-208000
	I0213 15:49:02.995373   17751 cli_runner.go:164] Run: docker run --rm --name auto-208000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-208000 --entrypoint /usr/bin/test -v auto-208000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 15:49:03.437633   17751 oci.go:107] Successfully prepared a docker volume auto-208000
	I0213 15:49:03.437682   17751 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:49:03.437695   17751 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 15:49:03.437793   17751 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-208000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 15:49:06.044881   17751 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-208000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.605794719s)
	I0213 15:49:06.044918   17751 kic.go:203] duration metric: took 2.605977 seconds to extract preloaded images to volume
	I0213 15:49:06.045038   17751 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 15:49:06.218833   17751 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-208000 --name auto-208000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-208000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-208000 --network auto-208000 --ip 192.168.76.2 --volume auto-208000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 15:49:05.251068   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 15:49:05.251094   17379 api_server.go:103] status: https://127.0.0.1:55038/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 15:49:05.251106   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:05.295223   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0213 15:49:05.295257   17379 api_server.go:103] status: https://127.0.0.1:55038/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0213 15:49:05.536192   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:05.861083   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 15:49:05.861105   17379 api_server.go:103] status: https://127.0.0.1:55038/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 15:49:06.036273   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:06.042587   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 15:49:06.042605   17379 api_server.go:103] status: https://127.0.0.1:55038/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 15:49:06.537688   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:06.543595   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 200:
	ok
	I0213 15:49:06.551710   17379 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 15:49:06.551731   17379 api_server.go:131] duration metric: took 4.014766247s to wait for apiserver health ...
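
The 403 → 500 → 200 progression above is the apiserver's /healthz moving through bootstrap: anonymous requests are Forbidden until the system:public-info-viewer clusterrole exists, then poststarthooks like rbac/bootstrap-roles fail with 500 until they complete, and finally the probe returns "ok". A minimal polling loop in the same spirit (URL, timings, and the skipped TLS verification are assumptions for a local, self-signed apiserver):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200 or the deadline passes;
// 403s and 500s are treated as "not ready yet", the states seen above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local probe only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:55038/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
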
	I0213 15:49:06.551740   17379 cni.go:84] Creating CNI manager for ""
	I0213 15:49:06.551750   17379 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:49:06.572760   17379 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:49:06.615139   17379 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:49:06.635273   17379 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
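
The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI config that the previous lines recommended for the docker driver + docker runtime. The log doesn't show the file itself, but a representative bridge conflist of roughly that shape looks like the constant below (field values are illustrative, not the file's verbatim contents):

package main

import "fmt"

// bridgeConflist approximates the bridge CNI config minikube writes when it
// configures the bridge CNI; values here are a plausible sketch only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() { fmt.Println(bridgeConflist) }
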
	I0213 15:49:06.672671   17379 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 15:49:06.682463   17379 system_pods.go:59] 5 kube-system pods found
	I0213 15:49:06.682480   17379 system_pods.go:61] "etcd-kubernetes-upgrade-108000" [6f455d5e-fda9-47dd-a106-c6fa7780e56a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 15:49:06.682491   17379 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-108000" [ca1e9eb0-cd22-414b-984a-1e04559af854] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 15:49:06.682503   17379 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-108000" [4f293c6e-636d-4c51-95f4-d028afddedaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 15:49:06.682510   17379 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-108000" [148d707a-0f7c-4af8-b139-08f43348adb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 15:49:06.682528   17379 system_pods.go:61] "storage-provisioner" [ef7eb8c0-0607-40cd-bf6d-7aadca2d4f05] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 15:49:06.682535   17379 system_pods.go:74] duration metric: took 9.834514ms to wait for pod list to return data ...
	I0213 15:49:06.682541   17379 node_conditions.go:102] verifying NodePressure condition ...
	I0213 15:49:06.687252   17379 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 15:49:06.687282   17379 node_conditions.go:123] node cpu capacity is 12
	I0213 15:49:06.687298   17379 node_conditions.go:105] duration metric: took 4.743803ms to run NodePressure ...
	I0213 15:49:06.687318   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:49:06.964264   17379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:49:06.979396   17379 ops.go:34] apiserver oom_adj: -16
	I0213 15:49:06.979426   17379 kubeadm.go:640] restartCluster took 16.502283745s
	I0213 15:49:06.979444   17379 kubeadm.go:406] StartCluster complete in 16.536357312s
	I0213 15:49:06.979469   17379 settings.go:142] acquiring lock: {Name:mk73e2877e5f833d3067188c2d2115030ace2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:49:06.979588   17379 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:49:06.980517   17379 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:49:06.981059   17379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:49:06.981100   17379 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:49:06.981191   17379 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-108000"
	I0213 15:49:06.981229   17379 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-108000"
	W0213 15:49:06.981237   17379 addons.go:243] addon storage-provisioner should already be in state true
	I0213 15:49:06.981301   17379 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-108000"
	I0213 15:49:06.981318   17379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-108000"
	I0213 15:49:06.981366   17379 host.go:66] Checking if "kubernetes-upgrade-108000" exists ...
	I0213 15:49:06.981735   17379 config.go:182] Loaded profile config "kubernetes-upgrade-108000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:49:06.981741   17379 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:49:06.981728   17379 kapi.go:59] client config for kubernetes-upgrade-108000: &rest.Config{Host:"https://127.0.0.1:55038", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.key", CAFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:49:06.984167   17379 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:49:06.991899   17379 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-108000" context rescaled to 1 replicas
	I0213 15:49:06.992327   17379 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:49:07.039198   17379 out.go:177] * Verifying Kubernetes components...
	I0213 15:49:07.077413   17379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:49:07.161260   17379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:49:07.140992   17379 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 15:49:07.141017   17379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:49:07.181721   17379 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:49:07.181741   17379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:49:07.182599   17379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
	I0213 15:49:07.193135   17379 kapi.go:59] client config for kubernetes-upgrade-108000: &rest.Config{Host:"https://127.0.0.1:55038", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubernetes-upgrade-108000/client.key", CAFile:"/Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f7ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:49:07.193503   17379 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-108000"
	W0213 15:49:07.193523   17379 addons.go:243] addon default-storageclass should already be in state true
	I0213 15:49:07.193543   17379 host.go:66] Checking if "kubernetes-upgrade-108000" exists ...
	I0213 15:49:07.193921   17379 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108000 --format={{.State.Status}}
	I0213 15:49:07.261973   17379 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:49:07.262224   17379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:49:07.266295   17379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:49:07.276908   17379 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:49:07.276929   17379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:49:07.277079   17379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108000
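
Because every container port was published to 127.0.0.1 with a random host port (see the docker run line earlier), minikube has to ask Docker which port it actually got before it can open an SSH connection; that is what the inspect template above does for 22/tcp. The same lookup as a standalone sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the random 127.0.0.1 port Docker published for a
// container port, using essentially the same template as the log above.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("kubernetes-upgrade-108000", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", p) // 55039 in the run above
}
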
	I0213 15:49:07.313324   17379 api_server.go:72] duration metric: took 320.437557ms to wait for apiserver process to appear ...
	I0213 15:49:07.313361   17379 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:49:07.313394   17379 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55038/healthz ...
	I0213 15:49:07.320151   17379 api_server.go:279] https://127.0.0.1:55038/healthz returned 200:
	ok
	I0213 15:49:07.322440   17379 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 15:49:07.322458   17379 api_server.go:131] duration metric: took 9.076542ms to wait for apiserver health ...
	I0213 15:49:07.322464   17379 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 15:49:07.327425   17379 system_pods.go:59] 5 kube-system pods found
	I0213 15:49:07.327451   17379 system_pods.go:61] "etcd-kubernetes-upgrade-108000" [6f455d5e-fda9-47dd-a106-c6fa7780e56a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 15:49:07.327476   17379 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-108000" [ca1e9eb0-cd22-414b-984a-1e04559af854] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 15:49:07.327485   17379 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-108000" [4f293c6e-636d-4c51-95f4-d028afddedaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 15:49:07.327495   17379 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-108000" [148d707a-0f7c-4af8-b139-08f43348adb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 15:49:07.327505   17379 system_pods.go:61] "storage-provisioner" [ef7eb8c0-0607-40cd-bf6d-7aadca2d4f05] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0213 15:49:07.327516   17379 system_pods.go:74] duration metric: took 5.039811ms to wait for pod list to return data ...
	I0213 15:49:07.327531   17379 kubeadm.go:581] duration metric: took 334.626467ms to wait for : map[apiserver:true system_pods:true] ...
	I0213 15:49:07.327542   17379 node_conditions.go:102] verifying NodePressure condition ...
	I0213 15:49:07.331348   17379 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 15:49:07.331366   17379 node_conditions.go:123] node cpu capacity is 12
	I0213 15:49:07.331386   17379 node_conditions.go:105] duration metric: took 3.830397ms to run NodePressure ...
	I0213 15:49:07.331395   17379 start.go:228] waiting for startup goroutines ...
	I0213 15:49:07.355092   17379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/kubernetes-upgrade-108000/id_rsa Username:docker}
	I0213 15:49:07.407437   17379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:49:07.489569   17379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:49:07.996095   17379 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 15:49:08.016862   17379 addons.go:505] enable addons completed in 1.034170607s: enabled=[storage-provisioner default-storageclass]
	I0213 15:49:08.016887   17379 start.go:233] waiting for cluster config update ...
	I0213 15:49:08.016904   17379 start.go:242] writing updated cluster config ...
	I0213 15:49:08.017319   17379 ssh_runner.go:195] Run: rm -f paused
	I0213 15:49:08.064111   17379 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 15:49:08.084891   17379 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-108000" cluster and "default" namespace by default
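
The final "minor skew: 0" line compares the kubectl client's minor version (1.29.1) against the cluster's (1.29.0-rc.2); minikube warns when the two drift apart. A toy version of that comparison (the parsing here is deliberately naive):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.29.1", "1.29.0-rc.2")
	fmt.Println("minor skew:", skew) // 0
}
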
	
	
	==> Docker <==
	Feb 13 23:48:49 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 13 23:48:49 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 13 23:48:49 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 13 23:48:49 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:49Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 13 23:48:49 kubernetes-upgrade-108000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 13 23:48:54 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b254c3cf23755cbf240f69a64e6034e19848d93e1531a181381e4f6ca5c1c65d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:48:54 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/420670df85fa09e47d57d9f2b3ef5f0a754a696ce336d4aee7e85ab4f74b173b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:48:54 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef7e0400d4a7cb289d24811e88c22b8f2680a66307af688c1ab9e667ac25a84f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:48:54 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ccfdf2f81a8f41353b02ba2da55e404c0acd2bcd219aa7d065b2e6ad43571db4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.287615882Z" level=info msg="ignoring event" container=5d0bc442024dd647175e20c3a4d5eeedfcc4c3327e7e626b43d81324007b38c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.289658950Z" level=info msg="ignoring event" container=b254c3cf23755cbf240f69a64e6034e19848d93e1531a181381e4f6ca5c1c65d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.295145256Z" level=info msg="ignoring event" container=ef7e0400d4a7cb289d24811e88c22b8f2680a66307af688c1ab9e667ac25a84f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.297448966Z" level=info msg="ignoring event" container=f1d764758598e496fb89dec87fb0a94d8b3917fdbbbca4add05f86d5775adf0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.297574573Z" level=info msg="ignoring event" container=ccfdf2f81a8f41353b02ba2da55e404c0acd2bcd219aa7d065b2e6ad43571db4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.297929878Z" level=info msg="ignoring event" container=420670df85fa09e47d57d9f2b3ef5f0a754a696ce336d4aee7e85ab4f74b173b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.310387935Z" level=info msg="ignoring event" container=7c0ab1301d1f6bc4249ed0804f59e0cbe27683429c04937b7ee331e0b8072eb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:48:59 kubernetes-upgrade-108000 dockerd[3430]: time="2024-02-13T23:48:59.951132468Z" level=info msg="ignoring event" container=62a004cf27b1d54cd7c9f5ad3350304c4b44ac8ebe8e3e601a970321bec7def5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:49:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/652f5154836cd1c0b675a5ec257bc9199c7abadd7645db871f9119a493b8c7d9/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: W0213 23:49:00.132511    3657 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:49:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9da557b3e89afe5580b32da482252ac3958f4bb6179841558db9e0592c06d876/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: W0213 23:49:00.210430    3657 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:49:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37e66f76a88e84751c1aff4192dc9facd6b662c916f6c18496f5ae664282c616/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: W0213 23:49:00.213429    3657 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: time="2024-02-13T23:49:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/594513550f357f7e384ea77f7dcaf6b758817b753851b19f1c856c25012f8ec5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 13 23:49:00 kubernetes-upgrade-108000 cri-dockerd[3657]: W0213 23:49:00.215581    3657 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec867e6528387       d4e01cdf63970       8 seconds ago       Running             kube-controller-manager   2                   652f5154836cd       kube-controller-manager-kubernetes-upgrade-108000
	c9a57f9fac6e2       a0eed15eed449       8 seconds ago       Running             etcd                      2                   9da557b3e89af       etcd-kubernetes-upgrade-108000
	98527de8efc70       4270645ed6b7a       8 seconds ago       Running             kube-scheduler            2                   37e66f76a88e8       kube-scheduler-kubernetes-upgrade-108000
	df4aa5ff2ac17       bbb47a0f83324       8 seconds ago       Running             kube-apiserver            2                   594513550f357       kube-apiserver-kubernetes-upgrade-108000
	62a004cf27b1d       bbb47a0f83324       15 seconds ago      Exited              kube-apiserver            1                   ef7e0400d4a7c       kube-apiserver-kubernetes-upgrade-108000
	5d0bc442024dd       4270645ed6b7a       15 seconds ago      Exited              kube-scheduler            1                   ccfdf2f81a8f4       kube-scheduler-kubernetes-upgrade-108000
	7c0ab1301d1f6       a0eed15eed449       15 seconds ago      Exited              etcd                      1                   420670df85fa0       etcd-kubernetes-upgrade-108000
	f1d764758598e       d4e01cdf63970       15 seconds ago      Exited              kube-controller-manager   1                   b254c3cf23755       kube-controller-manager-kubernetes-upgrade-108000
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-108000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-108000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:48:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-108000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:49:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:49:05 +0000   Tue, 13 Feb 2024 23:48:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:49:05 +0000   Tue, 13 Feb 2024 23:48:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:49:05 +0000   Tue, 13 Feb 2024 23:48:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:49:05 +0000   Tue, 13 Feb 2024 23:48:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-108000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067668Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6067668Ki
	  pods:               110
	System Info:
	  Machine ID:                 80746e0a29a643888b2af01e612c46e7
	  System UUID:                80746e0a29a643888b2af01e612c46e7
	  Boot ID:                    eafff5ab-67ad-478e-9471-32de0553af9c
	  Kernel Version:             6.6.12-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-108000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kube-apiserver-kubernetes-upgrade-108000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-108000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-kubernetes-upgrade-108000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 47s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet  Node kubernetes-upgrade-108000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet  Node kubernetes-upgrade-108000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet  Node kubernetes-upgrade-108000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	
	
	==> etcd [7c0ab1301d1f] <==
	{"level":"info","ts":"2024-02-13T23:48:55.087924Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-13T23:48:56.403425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-13T23:48:56.403457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:48:56.403475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-13T23:48:56.403485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-02-13T23:48:56.40349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-13T23:48:56.403496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-02-13T23:48:56.403501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-02-13T23:48:56.405745Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-108000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:48:56.405784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:48:56.405848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:48:56.406386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:48:56.40642Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:48:56.411193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:48:56.411193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-13T23:48:59.215404Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-13T23:48:59.215535Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-108000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-02-13T23:48:59.215636Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T23:48:59.215805Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T23:48:59.226254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T23:48:59.226387Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-13T23:48:59.226502Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-02-13T23:48:59.287381Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-13T23:48:59.287705Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-13T23:48:59.287742Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-108000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> etcd [c9a57f9fac6e] <==
	{"level":"info","ts":"2024-02-13T23:49:04.221263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:49:04.22354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:49:04.22362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:49:04.227939Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-13T23:49:04.228265Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:49:05.63026Z","caller":"traceutil/trace.go:171","msg":"trace[2105232673] linearizableReadLoop","detail":"{readStateIndex:314; appliedIndex:313; }","duration":"331.548881ms","start":"2024-02-13T23:49:05.298697Z","end":"2024-02-13T23:49:05.630246Z","steps":["trace[2105232673] 'read index received'  (duration: 330.93831ms)","trace[2105232673] 'applied index is now lower than readState.Index'  (duration: 610.046µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T23:49:05.630449Z","caller":"traceutil/trace.go:171","msg":"trace[1745001788] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"332.671474ms","start":"2024-02-13T23:49:05.29766Z","end":"2024-02-13T23:49:05.630332Z","steps":["trace[1745001788] 'process raft request'  (duration: 331.944032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.630468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.768564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/kubernetes-upgrade-108000\" ","response":"range_response_count:1 size:685"}
	{"level":"info","ts":"2024-02-13T23:49:05.630924Z","caller":"traceutil/trace.go:171","msg":"trace[394259776] range","detail":"{range_begin:/registry/csinodes/kubernetes-upgrade-108000; range_end:; response_count:1; response_revision:301; }","duration":"332.036997ms","start":"2024-02-13T23:49:05.298679Z","end":"2024-02-13T23:49:05.630716Z","steps":["trace[394259776] 'agreement among raft nodes before linearized reading'  (duration: 331.737738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.631Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.298672Z","time spent":"332.319185ms","remote":"127.0.0.1:46050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":709,"request content":"key:\"/registry/csinodes/kubernetes-upgrade-108000\" "}
	{"level":"warn","ts":"2024-02-13T23:49:05.632348Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.297648Z","time spent":"332.836486ms","remote":"127.0.0.1:45910","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":690,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-fq3ekhvphhfmmpive3s3jmquny\" mod_revision:292 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-fq3ekhvphhfmmpive3s3jmquny\" value_size:617 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-fq3ekhvphhfmmpive3s3jmquny\" > >"}
	{"level":"info","ts":"2024-02-13T23:49:05.854213Z","caller":"traceutil/trace.go:171","msg":"trace[900718492] linearizableReadLoop","detail":"{readStateIndex:316; appliedIndex:314; }","duration":"223.8782ms","start":"2024-02-13T23:49:05.630326Z","end":"2024-02-13T23:49:05.854204Z","steps":["trace[900718492] 'read index received'  (duration: 223.279057ms)","trace[900718492] 'applied index is now lower than readState.Index'  (duration: 598.551µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T23:49:05.854309Z","caller":"traceutil/trace.go:171","msg":"trace[666187935] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"555.451054ms","start":"2024-02-13T23:49:05.298847Z","end":"2024-02-13T23:49:05.854298Z","steps":["trace[666187935] 'process raft request'  (duration: 554.844753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.854503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"506.002319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.67.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-02-13T23:49:05.854524Z","caller":"traceutil/trace.go:171","msg":"trace[635261550] range","detail":"{range_begin:/registry/masterleases/192.168.67.2; range_end:; response_count:1; response_revision:302; }","duration":"506.028202ms","start":"2024-02-13T23:49:05.348491Z","end":"2024-02-13T23:49:05.854519Z","steps":["trace[635261550] 'agreement among raft nodes before linearized reading'  (duration: 505.952571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.854522Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.29884Z","time spent":"555.607829ms","remote":"127.0.0.1:45910","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":589,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-108000\" mod_revision:289 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-108000\" value_size:523 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-108000\" > >"}
	{"level":"warn","ts":"2024-02-13T23:49:05.854537Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.34846Z","time spent":"506.073942ms","remote":"127.0.0.1:45654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":155,"request content":"key:\"/registry/masterleases/192.168.67.2\" "}
	{"level":"warn","ts":"2024-02-13T23:49:05.855726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"556.76948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:2936"}
	{"level":"info","ts":"2024-02-13T23:49:05.856322Z","caller":"traceutil/trace.go:171","msg":"trace[1032545959] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:302; }","duration":"557.364322ms","start":"2024-02-13T23:49:05.29895Z","end":"2024-02-13T23:49:05.856314Z","steps":["trace[1032545959] 'agreement among raft nodes before linearized reading'  (duration: 556.489017ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.856359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.298945Z","time spent":"557.402467ms","remote":"127.0.0.1:45726","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":2960,"request content":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" "}
	{"level":"info","ts":"2024-02-13T23:49:05.856721Z","caller":"traceutil/trace.go:171","msg":"trace[1337643756] transaction","detail":"{read_only:false; number_of_response:0; response_revision:302; }","duration":"462.473003ms","start":"2024-02-13T23:49:05.393058Z","end":"2024-02-13T23:49:05.855531Z","steps":["trace[1337643756] 'process raft request'  (duration: 460.839271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.857726Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.393046Z","time spent":"463.717229ms","remote":"127.0.0.1:45810","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/minions/kubernetes-upgrade-108000\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/kubernetes-upgrade-108000\" value_size:3786 >> failure:<>"}
	{"level":"warn","ts":"2024-02-13T23:49:05.858592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.202184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T23:49:05.858631Z","caller":"traceutil/trace.go:171","msg":"trace[1226515424] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:302; }","duration":"314.244277ms","start":"2024-02-13T23:49:05.544376Z","end":"2024-02-13T23:49:05.85862Z","steps":["trace[1226515424] 'agreement among raft nodes before linearized reading'  (duration: 311.181441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:49:05.858662Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T23:49:05.544355Z","time spent":"314.299292ms","remote":"127.0.0.1:45610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 23:49:10 up  1:12,  0 users,  load average: 5.46, 4.79, 4.38
	Linux kubernetes-upgrade-108000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [62a004cf27b1] <==
	W0213 23:48:59.221372       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221400       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221442       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221469       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221485       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221488       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221518       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221533       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221536       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221559       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221611       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221672       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221711       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221754       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221837       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221841       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221891       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221942       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221942       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.221957       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.222142       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.222488       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.222525       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.222570       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0213 23:48:59.222671       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
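
The wall of identical warnings above is every gRPC channel in the apiserver's etcd client pool failing to reconnect while etcd is down across the upgrade restart; each channel retries with its own backoff, which is why the same message repeats once per SubChannel. A minimal Go probe, hypothetical and not part of this suite, reproduces the same "connection refused" dial error against a closed etcd client port:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 2379 is etcd's client port; with etcd stopped, the TCP dial is
		// refused immediately, which is what each gRPC SubChannel logs above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("etcd port is accepting connections")
	}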
	
	
	==> kube-apiserver [df4aa5ff2ac1] <==
	I0213 23:49:05.342259       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0213 23:49:05.342261       1 aggregator.go:165] initial CRD sync complete...
	I0213 23:49:05.342266       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0213 23:49:05.342269       1 autoregister_controller.go:141] Starting autoregister controller
	I0213 23:49:05.342273       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0213 23:49:05.342281       1 cache.go:39] Caches are synced for autoregister controller
	I0213 23:49:05.342383       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 23:49:05.346090       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0213 23:49:05.346149       1 shared_informer.go:318] Caches are synced for configmaps
	I0213 23:49:05.856964       1 trace.go:236] Trace[31488353]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4cdf9b85-1a68-45dd-bc56-94e962dcb41b,client:192.168.67.2,api-group:coordination.k8s.io,api-version:v1,name:kubernetes-upgrade-108000,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-108000,user-agent:kubelet/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:PUT (13-Feb-2024 23:49:05.297) (total time: 559ms):
	Trace[31488353]: ["GuaranteedUpdate etcd3" audit-id:4cdf9b85-1a68-45dd-bc56-94e962dcb41b,key:/leases/kube-node-lease/kubernetes-upgrade-108000,type:*coordination.Lease,resource:leases.coordination.k8s.io 559ms (23:49:05.297)
	Trace[31488353]:  ---"Txn call completed" 558ms (23:49:05.856)]
	Trace[31488353]: [559.237529ms] [559.237529ms] END
	I0213 23:49:05.858439       1 trace.go:236] Trace[921490705]: "Get" accept:application/json, */*,audit-id:4469a60a-d6b5-4b62-9ffa-4b7acf721e2f,client:192.168.67.2,api-group:,api-version:v1,name:extension-apiserver-authentication,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:configmaps,scope:resource,url:/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication,user-agent:kube-scheduler/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:GET (13-Feb-2024 23:49:05.298) (total time: 559ms):
	Trace[921490705]: ---"About to write a response" 559ms (23:49:05.858)
	Trace[921490705]: [559.669503ms] [559.669503ms] END
	I0213 23:49:05.861546       1 trace.go:236] Trace[274456153]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a569bbf7-b9b4-48e4-aeae-91ae11f5d22d,client:192.168.67.2,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:POST (13-Feb-2024 23:49:05.290) (total time: 570ms):
	Trace[274456153]: [570.584294ms] [570.584294ms] END
	E0213 23:49:05.867971       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0213 23:49:06.245017       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0213 23:49:06.802813       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0213 23:49:06.812309       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0213 23:49:06.848846       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0213 23:49:06.869161       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 23:49:06.874084       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
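
The Trace[...] entries above come from the k8s.io/utils/trace package: the apiserver opens a trace per request and only writes it out when the total time crosses a threshold, which is why only the ~559ms lease update and configmap get appear here. A hedged sketch of the same pattern; handleUpdate and the 500ms threshold are illustrative, not the apiserver's actual values:

	package main

	import (
		"time"

		"k8s.io/utils/trace"
	)

	// handleUpdate mimics how the Trace[...] lines above are produced: a
	// trace is created per operation and logged only when the elapsed
	// time exceeds the LogIfLong threshold.
	func handleUpdate() {
		t := trace.New("Update", trace.Field{Key: "resource", Value: "leases"})
		defer t.LogIfLong(500 * time.Millisecond)

		time.Sleep(600 * time.Millisecond) // stand-in for a slow etcd Txn
		t.Step("Txn call completed")
	}

	func main() {
		handleUpdate()
	}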
	
	
	==> kube-controller-manager [ec867e652838] <==
	I0213 23:49:04.087266       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 23:49:04.087344       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0213 23:49:04.087420       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0213 23:49:07.258833       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0213 23:49:07.258871       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0213 23:49:07.275428       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0213 23:49:07.275664       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0213 23:49:07.275712       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0213 23:49:07.288681       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0213 23:49:07.288889       1 gc_controller.go:101] "Starting GC controller"
	I0213 23:49:07.288898       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0213 23:49:07.298677       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0213 23:49:07.298743       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0213 23:49:07.298757       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0213 23:49:07.303364       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0213 23:49:07.303700       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0213 23:49:07.303756       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0213 23:49:07.310398       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0213 23:49:07.310652       1 disruption.go:433] "Sending events to api server."
	I0213 23:49:07.310694       1 disruption.go:444] "Starting disruption controller"
	I0213 23:49:07.310705       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0213 23:49:07.319959       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0213 23:49:07.320129       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0213 23:49:07.320144       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0213 23:49:07.359126       1 shared_informer.go:318] Caches are synced for tokens
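
Every controller above follows the same startup handshake: register with the controller manager, start its informers, then block on "Waiting for caches to sync" before processing anything. A minimal client-go sketch of that handshake, assuming a cluster reachable via $KUBECONFIG (the pod informer is just an example resource):

	package main

	import (
		"log"
		"os"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Build an informer and wait for its cache to sync, the same
		// handshake each controller logs above before starting work.
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		pods := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		if !cache.WaitForCacheSync(stop, pods.HasSynced) {
			log.Fatal("caches did not sync")
		}
		log.Println("caches are synced")
	}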
	
	
	==> kube-controller-manager [f1d764758598] <==
	I0213 23:48:55.534324       1 serving.go:380] Generated self-signed cert in-memory
	I0213 23:48:55.976423       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0213 23:48:55.976500       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:48:55.977990       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0213 23:48:55.978143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 23:48:55.978220       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0213 23:48:55.978260       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-scheduler [5d0bc442024d] <==
	I0213 23:48:55.701613       1 serving.go:380] Generated self-signed cert in-memory
	W0213 23:48:57.800849       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 23:48:57.800878       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:48:57.800889       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 23:48:57.800896       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 23:48:57.895599       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0213 23:48:57.895660       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:48:57.897606       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0213 23:48:57.897698       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0213 23:48:57.897710       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:48:57.897719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 23:48:57.997922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:48:59.216379       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0213 23:48:59.216434       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0213 23:48:59.216561       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0213 23:48:59.216799       1 run.go:74] "command failed" err="finished without leader elect"
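
This scheduler instance stopped serving at 23:48:59, the same moment the etcd connections above were torn down for the upgrade, and exited with "finished without leader elect"; the replacement instance [98527de8efc7] below takes over. The scheduler holds its role through a lease-based election; a hedged client-go sketch of that mechanism (the lock name "demo-lock" and the identity are made up):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "default"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("leading") },
				// Losing the lease (e.g. the apiserver going away mid-renewal)
				// stops the component, analogous to the scheduler exit above.
				OnStoppedLeading: func() { log.Println("lost lease, stopping") },
			},
		})
	}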
	
	
	==> kube-scheduler [98527de8efc7] <==
	I0213 23:49:03.304734       1 serving.go:380] Generated self-signed cert in-memory
	I0213 23:49:05.876159       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0213 23:49:05.876256       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:49:05.881061       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0213 23:49:05.881689       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0213 23:49:05.883481       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0213 23:49:05.883508       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0213 23:49:05.883547       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 23:49:05.883609       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0213 23:49:05.883655       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:49:05.881797       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0213 23:49:05.985117       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:49:05.985813       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0213 23:49:05.987778       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696589    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70294355c58929c7b613607374704231-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-108000\" (UID: \"70294355c58929c7b613607374704231\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696697    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3144ac96ef52f302e21c70f6c7d64ac7-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-108000\" (UID: \"3144ac96ef52f302e21c70f6c7d64ac7\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696727    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/920dc9fc1f56df113efcf80b0477d9f1-etcd-certs\") pod \"etcd-kubernetes-upgrade-108000\" (UID: \"920dc9fc1f56df113efcf80b0477d9f1\") " pod="kube-system/etcd-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:01.696733    4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-108000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="400ms"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696759    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/066832152237a43a834a034d99ef7023-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-108000\" (UID: \"066832152237a43a834a034d99ef7023\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696775    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70294355c58929c7b613607374704231-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-108000\" (UID: \"70294355c58929c7b613607374704231\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696799    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/920dc9fc1f56df113efcf80b0477d9f1-etcd-data\") pod \"etcd-kubernetes-upgrade-108000\" (UID: \"920dc9fc1f56df113efcf80b0477d9f1\") " pod="kube-system/etcd-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696812    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/066832152237a43a834a034d99ef7023-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-108000\" (UID: \"066832152237a43a834a034d99ef7023\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.696826    4900 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/066832152237a43a834a034d99ef7023-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-108000\" (UID: \"066832152237a43a834a034d99ef7023\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:01.733378    4900 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.67.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-108000.17b39106a8df4e28  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-108000,UID:kubernetes-upgrade-108000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-108000,},FirstTimestamp:2024-02-13 23:49:01.492588072 +0000 UTC m=+0.145225339,LastTimestamp:2024-02-13 23:49:01.492588072 +0000 UTC m=+0.145225339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-108000,}"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.811425    4900 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:01.811744    4900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-108000"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.959450    4900 scope.go:117] "RemoveContainer" containerID="62a004cf27b1d54cd7c9f5ad3350304c4b44ac8ebe8e3e601a970321bec7def5"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.967491    4900 scope.go:117] "RemoveContainer" containerID="5d0bc442024dd647175e20c3a4d5eeedfcc4c3327e7e626b43d81324007b38c3"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.975206    4900 scope.go:117] "RemoveContainer" containerID="7c0ab1301d1f6bc4249ed0804f59e0cbe27683429c04937b7ee331e0b8072eb3"
	Feb 13 23:49:01 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:01.992422    4900 scope.go:117] "RemoveContainer" containerID="f1d764758598e496fb89dec87fb0a94d8b3917fdbbbca4add05f86d5775adf0f"
	Feb 13 23:49:02 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:02.097726    4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-108000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Feb 13 23:49:02 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:02.221327    4900 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-108000"
	Feb 13 23:49:02 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:02.221566    4900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-108000"
	Feb 13 23:49:03 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:03.031893    4900 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-108000"
	Feb 13 23:49:05 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:05.459159    4900 apiserver.go:52] "Watching apiserver"
	Feb 13 23:49:05 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:05.496492    4900 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 13 23:49:05 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:05.867853    4900 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-108000"
	Feb 13 23:49:05 kubernetes-upgrade-108000 kubelet[4900]: I0213 23:49:05.868070    4900 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-108000"
	Feb 13 23:49:05 kubernetes-upgrade-108000 kubelet[4900]: E0213 23:49:05.868309    4900 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-108000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-108000"
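
Note the lease-retry interval doubling from 400ms to 800ms while the apiserver is unreachable: the kubelet backs off exponentially rather than hammering a dead endpoint. A stand-alone sketch of the same shape using apimachinery's wait package; the parameters and the fake success condition are illustrative:

	package main

	import (
		"errors"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Start at 400ms and double each step, like the intervals above.
		backoff := wait.Backoff{Duration: 400 * time.Millisecond, Factor: 2, Steps: 4}
		attempt := 0
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			attempt++
			fmt.Printf("attempt %d\n", attempt)
			return attempt >= 3, nil // pretend the apiserver came back
		})
		if errors.Is(err, wait.ErrWaitTimeout) {
			fmt.Println("gave up waiting")
		}
	}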
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-108000 -n kubernetes-upgrade-108000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-108000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-108000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-108000 describe pod storage-provisioner: exit status 1 (62.834335ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-108000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-108000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-108000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-108000: (2.598485495s)
--- FAIL: TestKubernetesUpgrade (332.18s)
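
The post-mortem's non-running-pod query (helpers_test.go:261) can be expressed directly with client-go; a hedged equivalent of that kubectl field-selector call, assuming $KUBECONFIG points at the cluster under test:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same query the helper runs via kubectl: every pod, in every
		// namespace, whose phase is not Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}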

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (259.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0213 15:56:56.117672    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:57:06.357737    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:57:09.209843    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m19.180823318s)

                                                
                                                
-- stdout --
	* [old-k8s-version-745000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-745000 in cluster old-k8s-version-745000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:56:51.498081   22261 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:56:51.498272   22261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:56:51.498279   22261 out.go:304] Setting ErrFile to fd 2...
	I0213 15:56:51.498283   22261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:56:51.498534   22261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:56:51.500267   22261 out.go:298] Setting JSON to false
	I0213 15:56:51.523503   22261 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5471,"bootTime":1707863140,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:56:51.523615   22261 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:56:51.545670   22261 out.go:177] * [old-k8s-version-745000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:56:51.588563   22261 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:56:51.588632   22261 notify.go:220] Checking for updates...
	I0213 15:56:51.631418   22261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:56:51.652526   22261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:56:51.674368   22261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:56:51.695394   22261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:56:51.716357   22261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:56:51.737960   22261 config.go:182] Loaded profile config "bridge-208000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:56:51.738138   22261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:56:51.795875   22261 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:56:51.796044   22261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:56:51.901668   22261 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-13 23:56:51.891949316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:56:51.944137   22261 out.go:177] * Using the docker driver based on user configuration
	I0213 15:56:51.967030   22261 start.go:298] selected driver: docker
	I0213 15:56:51.967045   22261 start.go:902] validating driver "docker" against <nil>
	I0213 15:56:51.967055   22261 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:56:51.970316   22261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:56:52.076663   22261 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-13 23:56:52.065582055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:56:52.077004   22261 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:56:52.077412   22261 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:56:52.099178   22261 out.go:177] * Using Docker Desktop driver with root privileges
	I0213 15:56:52.120919   22261 cni.go:84] Creating CNI manager for ""
	I0213 15:56:52.120951   22261 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:56:52.120963   22261 start_flags.go:321] config:
	{Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:56:52.142646   22261 out.go:177] * Starting control plane node old-k8s-version-745000 in cluster old-k8s-version-745000
	I0213 15:56:52.184889   22261 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 15:56:52.226819   22261 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 15:56:52.247786   22261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:56:52.247805   22261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 15:56:52.247824   22261 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 15:56:52.247841   22261 cache.go:56] Caching tarball of preloaded images
	I0213 15:56:52.247941   22261 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 15:56:52.247951   22261 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 15:56:52.248503   22261 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/config.json ...
	I0213 15:56:52.248598   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/config.json: {Name:mk66ebef5bde6debcea5a2640297c9a4965b1ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:56:52.298339   22261 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 15:56:52.298358   22261 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 15:56:52.298378   22261 cache.go:194] Successfully downloaded all kic artifacts
	I0213 15:56:52.298430   22261 start.go:365] acquiring machines lock for old-k8s-version-745000: {Name:mkd7f9273d4ef06a0c4934b33030a9cfbc88fa9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:56:52.298589   22261 start.go:369] acquired machines lock for "old-k8s-version-745000" in 145.639µs
	I0213 15:56:52.298617   22261 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:56:52.298948   22261 start.go:125] createHost starting for "" (driver="docker")
	I0213 15:56:52.320758   22261 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0213 15:56:52.321220   22261 start.go:159] libmachine.API.Create for "old-k8s-version-745000" (driver="docker")
	I0213 15:56:52.321272   22261 client.go:168] LocalClient.Create starting
	I0213 15:56:52.321496   22261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
	I0213 15:56:52.321587   22261 main.go:141] libmachine: Decoding PEM data...
	I0213 15:56:52.321613   22261 main.go:141] libmachine: Parsing certificate...
	I0213 15:56:52.321697   22261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
	I0213 15:56:52.321754   22261 main.go:141] libmachine: Decoding PEM data...
	I0213 15:56:52.321766   22261 main.go:141] libmachine: Parsing certificate...
	I0213 15:56:52.342472   22261 cli_runner.go:164] Run: docker network inspect old-k8s-version-745000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 15:56:52.394660   22261 cli_runner.go:211] docker network inspect old-k8s-version-745000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 15:56:52.394763   22261 network_create.go:281] running [docker network inspect old-k8s-version-745000] to gather additional debugging logs...
	I0213 15:56:52.394780   22261 cli_runner.go:164] Run: docker network inspect old-k8s-version-745000
	W0213 15:56:52.447572   22261 cli_runner.go:211] docker network inspect old-k8s-version-745000 returned with exit code 1
	I0213 15:56:52.447602   22261 network_create.go:284] error running [docker network inspect old-k8s-version-745000]: docker network inspect old-k8s-version-745000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-745000 not found
	I0213 15:56:52.447612   22261 network_create.go:286] output of [docker network inspect old-k8s-version-745000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-745000 not found
	
	** /stderr **
	I0213 15:56:52.447752   22261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 15:56:52.502064   22261 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:56:52.502479   22261 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002146f90}
	I0213 15:56:52.502496   22261 network_create.go:124] attempt to create docker network old-k8s-version-745000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0213 15:56:52.502559   22261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-745000 old-k8s-version-745000
	W0213 15:56:52.554342   22261 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-745000 old-k8s-version-745000 returned with exit code 1
	W0213 15:56:52.554407   22261 network_create.go:149] failed to create docker network old-k8s-version-745000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-745000 old-k8s-version-745000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0213 15:56:52.554429   22261 network_create.go:116] failed to create docker network old-k8s-version-745000 192.168.58.0/24, will retry: subnet is taken
	I0213 15:56:52.555831   22261 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0213 15:56:52.556184   22261 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00239d570}
	I0213 15:56:52.556199   22261 network_create.go:124] attempt to create docker network old-k8s-version-745000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0213 15:56:52.556267   22261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-745000 old-k8s-version-745000
	I0213 15:56:52.644995   22261 network_create.go:108] docker network old-k8s-version-745000 192.168.67.0/24 created
	I0213 15:56:52.645043   22261 kic.go:121] calculated static IP "192.168.67.2" for the "old-k8s-version-745000" container
	I0213 15:56:52.645152   22261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 15:56:52.696629   22261 cli_runner.go:164] Run: docker volume create old-k8s-version-745000 --label name.minikube.sigs.k8s.io=old-k8s-version-745000 --label created_by.minikube.sigs.k8s.io=true
	I0213 15:56:52.748320   22261 oci.go:103] Successfully created a docker volume old-k8s-version-745000
	I0213 15:56:52.748433   22261 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-745000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-745000 --entrypoint /usr/bin/test -v old-k8s-version-745000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 15:56:53.370068   22261 oci.go:107] Successfully prepared a docker volume old-k8s-version-745000
	I0213 15:56:53.370105   22261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:56:53.370120   22261 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 15:56:53.370217   22261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-745000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 15:56:55.694906   22261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-745000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.324681886s)
	I0213 15:56:55.694936   22261 kic.go:203] duration metric: took 2.324868 seconds to extract preloaded images to volume
	I0213 15:56:55.695036   22261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 15:56:55.814238   22261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-745000 --name old-k8s-version-745000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-745000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-745000 --network old-k8s-version-745000 --ip 192.168.67.2 --volume old-k8s-version-745000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 15:56:56.245349   22261 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Running}}
	I0213 15:56:56.312042   22261 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Status}}
	I0213 15:56:56.375876   22261 cli_runner.go:164] Run: docker exec old-k8s-version-745000 stat /var/lib/dpkg/alternatives/iptables
	I0213 15:56:56.523853   22261 oci.go:144] the created container "old-k8s-version-745000" has a running status.
	I0213 15:56:56.523888   22261 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa...
	I0213 15:56:56.693061   22261 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 15:56:56.773945   22261 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Status}}
	I0213 15:56:56.840333   22261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 15:56:56.840356   22261 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-745000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 15:56:56.960329   22261 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Status}}
	I0213 15:56:57.017692   22261 machine.go:88] provisioning docker machine ...
	I0213 15:56:57.017740   22261 ubuntu.go:169] provisioning hostname "old-k8s-version-745000"
	I0213 15:56:57.017840   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:57.072213   22261 main.go:141] libmachine: Using SSH client type: native
	I0213 15:56:57.072545   22261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56464 <nil> <nil>}
	I0213 15:56:57.072560   22261 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-745000 && echo "old-k8s-version-745000" | sudo tee /etc/hostname
	I0213 15:56:57.241884   22261 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745000
	
	I0213 15:56:57.241984   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:57.299540   22261 main.go:141] libmachine: Using SSH client type: native
	I0213 15:56:57.299862   22261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56464 <nil> <nil>}
	I0213 15:56:57.299876   22261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-745000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-745000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-745000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:56:57.447707   22261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
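
Port 56464 in the SSH dials above is ephemeral: the node container was created with --publish=127.0.0.1::22 (empty host port), and the actual mapping is recovered each time with the docker container inspect template that recurs throughout this log:

    # Same lookup as the cli_runner steps above; prints the host port, e.g. 56464.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      old-k8s-version-745000
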
	I0213 15:56:57.447727   22261 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 15:56:57.447745   22261 ubuntu.go:177] setting up certificates
	I0213 15:56:57.447753   22261 provision.go:83] configureAuth start
	I0213 15:56:57.447828   22261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 15:56:57.507056   22261 provision.go:138] copyHostCerts
	I0213 15:56:57.507191   22261 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 15:56:57.507207   22261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 15:56:57.507360   22261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 15:56:57.507606   22261 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 15:56:57.507614   22261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 15:56:57.507748   22261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 15:56:57.507943   22261 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 15:56:57.507950   22261 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 15:56:57.508023   22261 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 15:56:57.508177   22261 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-745000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-745000]
	I0213 15:56:57.706994   22261 provision.go:172] copyRemoteCerts
	I0213 15:56:57.707055   22261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:56:57.707119   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:57.766067   22261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56464 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 15:56:57.872309   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:56:57.920029   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 15:56:57.965962   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 15:56:58.011191   22261 provision.go:86] duration metric: configureAuth took 563.437331ms
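
configureAuth generated the server certificate in-process (Go crypto), keyed to the SAN list logged above. As a reference point only, not what minikube runs, a rough openssl equivalent of that single step with the same org and SANs:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.old-k8s-version-745000"
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-745000')
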
	I0213 15:56:58.011223   22261 ubuntu.go:193] setting minikube options for container-runtime
	I0213 15:56:58.011355   22261 config.go:182] Loaded profile config "old-k8s-version-745000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 15:56:58.011417   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:58.067204   22261 main.go:141] libmachine: Using SSH client type: native
	I0213 15:56:58.067516   22261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56464 <nil> <nil>}
	I0213 15:56:58.067534   22261 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:56:58.207166   22261 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 15:56:58.207186   22261 ubuntu.go:71] root file system type: overlay
	I0213 15:56:58.207314   22261 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:56:58.207411   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:58.266883   22261 main.go:141] libmachine: Using SSH client type: native
	I0213 15:56:58.267231   22261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56464 <nil> <nil>}
	I0213 15:56:58.267287   22261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:56:58.432375   22261 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:56:58.432558   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:58.490858   22261 main.go:141] libmachine: Using SSH client type: native
	I0213 15:56:58.491162   22261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56464 <nil> <nil>}
	I0213 15:56:58.491175   22261 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:56:59.176289   22261 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-13 23:56:58.427230787 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0213 15:56:59.176321   22261 machine.go:91] provisioned docker machine in 2.158651629s
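
The diff above shows the override mechanism end to end: the stock unit's command is cleared with an empty ExecStart= (systemd's reset idiom, as the unit's own comments note) and replaced with the TLS-enabled dockerd invocation. One way to confirm which unit systemd actually loaded inside the node (the log itself runs systemctl cat docker.service shortly after):

    docker exec old-k8s-version-745000 \
      sh -c "systemctl cat docker.service | grep '^ExecStart='"
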
	I0213 15:56:59.176331   22261 client.go:171] LocalClient.Create took 6.855200692s
	I0213 15:56:59.176352   22261 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-745000" took 6.85528307s
	I0213 15:56:59.176362   22261 start.go:300] post-start starting for "old-k8s-version-745000" (driver="docker")
	I0213 15:56:59.176370   22261 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:56:59.176452   22261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:56:59.176544   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:59.234818   22261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56464 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 15:56:59.341567   22261 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:56:59.346795   22261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 15:56:59.346834   22261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 15:56:59.346842   22261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 15:56:59.346848   22261 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 15:56:59.346858   22261 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 15:56:59.346981   22261 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 15:56:59.347191   22261 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 15:56:59.347468   22261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:56:59.363679   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:56:59.404893   22261 start.go:303] post-start completed in 228.506449ms
	I0213 15:56:59.405663   22261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 15:56:59.482076   22261 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/config.json ...
	I0213 15:56:59.482631   22261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:56:59.482701   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:59.542539   22261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56464 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 15:56:59.636773   22261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 15:56:59.642731   22261 start.go:128] duration metric: createHost completed in 7.343922909s
	I0213 15:56:59.642754   22261 start.go:83] releasing machines lock for "old-k8s-version-745000", held for 7.344314573s
	I0213 15:56:59.642855   22261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 15:56:59.707081   22261 ssh_runner.go:195] Run: cat /version.json
	I0213 15:56:59.707080   22261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:56:59.707216   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:59.707275   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:56:59.781754   22261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56464 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 15:56:59.781797   22261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56464 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 15:56:59.882012   22261 ssh_runner.go:195] Run: systemctl --version
	I0213 15:56:59.993840   22261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 15:57:00.001077   22261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 15:57:00.049324   22261 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 15:57:00.049412   22261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:57:00.084158   22261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:57:00.117394   22261 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
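
The find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains a name and cniVersion 1.0.0, while bridge and podman configs have IPv6 routes dropped and their subnet pinned to the pod CIDR. A toy reproduction of the subnet pin (GNU sed; the file and its contents are invented for illustration):

    cat > /tmp/demo-bridge.conf <<'EOF'
    { "type": "bridge", "ipam": { "subnet": "192.168.0.0/24" } }
    EOF
    sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' /tmp/demo-bridge.conf
    cat /tmp/demo-bridge.conf   # subnet is now 10.244.0.0/16
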
	I0213 15:57:00.117412   22261 start.go:475] detecting cgroup driver to use...
	I0213 15:57:00.117424   22261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:57:00.117517   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:57:00.150453   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 15:57:00.169437   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:57:00.187255   22261 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:57:00.187340   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:57:00.206407   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:57:00.224843   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:57:00.243436   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:57:00.263176   22261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:57:00.280173   22261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:57:00.297403   22261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:57:00.313966   22261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:57:00.330779   22261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:57:00.402695   22261 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 15:57:00.493908   22261 start.go:475] detecting cgroup driver to use...
	I0213 15:57:00.493928   22261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 15:57:00.493993   22261 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:57:00.512890   22261 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 15:57:00.512960   22261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:57:00.533028   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:57:00.571015   22261 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:57:00.575512   22261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:57:00.591990   22261 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:57:00.627187   22261 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:57:00.700796   22261 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:57:00.808594   22261 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:57:00.808681   22261 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:57:00.846395   22261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:57:00.920920   22261 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:57:01.288726   22261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:57:01.315299   22261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:57:01.384809   22261 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 15:57:01.384946   22261 cli_runner.go:164] Run: docker exec -t old-k8s-version-745000 dig +short host.docker.internal
	I0213 15:57:01.525465   22261 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 15:57:01.525564   22261 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 15:57:01.530334   22261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:57:01.547791   22261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 15:57:01.605569   22261 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:57:01.605652   22261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:57:01.627262   22261 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 15:57:01.627295   22261 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
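
The mismatch here is in names, not content: the v1.16 preload ships its images under the old k8s.gcr.io prefix, while this minikube build looks them up as registry.k8s.io/..., so each image is reported as not preloaded and, further down, as needing transfer. Since the two registries serve the same images for these tags, a manual retag inside the node would reconcile the names; this is purely illustrative, as the run instead re-copies the tarball and then tries the per-image cache:

    for img in kube-apiserver:v1.16.0 kube-proxy:v1.16.0 \
               kube-controller-manager:v1.16.0 kube-scheduler:v1.16.0 \
               etcd:3.3.15-0 coredns:1.6.2 pause:3.1; do
      docker exec old-k8s-version-745000 \
        docker tag "k8s.gcr.io/${img}" "registry.k8s.io/${img}"
    done
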
	I0213 15:57:01.627383   22261 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:57:01.645386   22261 ssh_runner.go:195] Run: which lz4
	I0213 15:57:01.650371   22261 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 15:57:01.654739   22261 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:57:01.654764   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 15:57:08.556470   22261 docker.go:649] Took 6.906283 seconds to copy over tarball
	I0213 15:57:08.556546   22261 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 15:57:10.361204   22261 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.804658418s)
	I0213 15:57:10.361234   22261 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 15:57:10.410977   22261 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:57:10.427813   22261 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 15:57:10.457791   22261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:57:10.526862   22261 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:57:11.702844   22261 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175972459s)
	I0213 15:57:11.702984   22261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:57:11.728437   22261 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 15:57:11.728450   22261 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 15:57:11.728460   22261 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:57:11.734181   22261 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:57:11.734487   22261 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:57:11.734827   22261 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:57:11.735489   22261 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:57:11.736032   22261 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 15:57:11.736053   22261 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:57:11.736127   22261 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:57:11.736266   22261 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 15:57:11.765383   22261 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:57:11.765442   22261 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:57:11.765481   22261 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:57:11.765644   22261 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:57:11.785360   22261 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 15:57:11.785386   22261 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:57:11.785471   22261 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 15:57:11.785467   22261 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:57:13.752680   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:57:13.773412   22261 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 15:57:13.773448   22261 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:57:13.773504   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 15:57:13.792770   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 15:57:13.830207   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 15:57:13.849634   22261 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 15:57:13.849660   22261 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 15:57:13.849738   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 15:57:13.861605   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 15:57:13.869464   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 15:57:13.879357   22261 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 15:57:13.879384   22261 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 15:57:13.879444   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 15:57:13.894515   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:57:13.894946   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 15:57:13.899212   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 15:57:13.899222   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:57:13.904581   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:57:13.921454   22261 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 15:57:13.921490   22261 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 15:57:13.921492   22261 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 15:57:13.921518   22261 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:57:13.921571   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 15:57:13.921613   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 15:57:13.925576   22261 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 15:57:13.925621   22261 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:57:13.925751   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 15:57:13.928963   22261 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 15:57:13.929002   22261 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:57:13.929087   22261 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 15:57:13.987876   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 15:57:13.987884   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 15:57:13.992576   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 15:57:13.998856   22261 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 15:57:14.079557   22261 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:57:14.100017   22261 cache_images.go:92] LoadImages completed in 2.371596096s
	W0213 15:57:14.100073   22261 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
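
The warning appears twice because it is written both to the klog stream (W prefix) and to the console output. Its cause is visible in the "Loading image from" lines above: each image was to come from the host's per-image cache under .minikube/cache/images/amd64/registry.k8s.io/, and the first stat (kube-scheduler_v1.16.0) found nothing there. The failure is non-fatal; the cluster continues with the k8s.gcr.io-named images already extracted from the tarball. A host-side spot check using the path from the log:

    ls /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/ \
      2>/dev/null || echo "per-image cache absent"
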
	I0213 15:57:14.100153   22261 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:57:14.148405   22261 cni.go:84] Creating CNI manager for ""
	I0213 15:57:14.148423   22261 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:57:14.148436   22261 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:57:14.148473   22261 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-745000 NodeName:old-k8s-version-745000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 15:57:14.148566   22261 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-745000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-745000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 15:57:14.148620   22261 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-745000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 15:57:14.148682   22261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 15:57:14.163661   22261 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:57:14.163729   22261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:57:14.178700   22261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0213 15:57:14.208911   22261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:57:14.240768   22261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0213 15:57:14.272688   22261 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 15:57:14.277601   22261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:57:14.295950   22261 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000 for IP: 192.168.67.2
	I0213 15:57:14.295969   22261 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.296148   22261 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 15:57:14.296220   22261 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 15:57:14.296263   22261 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.key
	I0213 15:57:14.296274   22261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.crt with IP's: []
	I0213 15:57:14.448899   22261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.crt ...
	I0213 15:57:14.448911   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.crt: {Name:mk68fdcfa92864c3ea08060bdee743ea3e245109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.459727   22261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.key ...
	I0213 15:57:14.459750   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.key: {Name:mkf78ccc73ead0e4fef122c0c2bcb200766f4699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.482178   22261 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key.c7fa3a9e
	I0213 15:57:14.482221   22261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 15:57:14.530577   22261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt.c7fa3a9e ...
	I0213 15:57:14.530588   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt.c7fa3a9e: {Name:mkd6a0313b6d700ea2153799ade7a3cf02b207f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.566670   22261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key.c7fa3a9e ...
	I0213 15:57:14.566735   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key.c7fa3a9e: {Name:mkbeeb2c27d0efc6401a3a7b66a03a47ab506bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.587766   22261 certs.go:337] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt
	I0213 15:57:14.588151   22261 certs.go:341] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key
	I0213 15:57:14.588447   22261 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key
	I0213 15:57:14.588470   22261 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.crt with IP's: []
	I0213 15:57:14.678187   22261 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.crt ...
	I0213 15:57:14.678203   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.crt: {Name:mk16e0d45474ebd3450e84601182e06e41fd5acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.678553   22261 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key ...
	I0213 15:57:14.678562   22261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key: {Name:mk6a3cb3bd4e33cd7fec6b15c08b7de342a942f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:57:14.679058   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 15:57:14.679110   22261 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 15:57:14.679120   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:57:14.679205   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:57:14.679314   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:57:14.679371   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 15:57:14.679459   22261 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 15:57:14.680023   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:57:14.739626   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:57:14.785017   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:57:14.825438   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 15:57:14.871148   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:57:14.930388   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 15:57:14.972706   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:57:15.013443   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 15:57:15.055463   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 15:57:15.096568   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 15:57:15.148543   22261 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:57:15.195386   22261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:57:15.230314   22261 ssh_runner.go:195] Run: openssl version
	I0213 15:57:15.236653   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:57:15.256363   22261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:57:15.261457   22261 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:57:15.261509   22261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:57:15.269789   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:57:15.287963   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 15:57:15.306388   22261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 15:57:15.312624   22261 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 15:57:15.312676   22261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 15:57:15.320384   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 15:57:15.339625   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 15:57:15.358205   22261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 15:57:15.364006   22261 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 15:57:15.364081   22261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 15:57:15.372703   22261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
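
The test/openssl/ln sequence repeated above builds OpenSSL's hashed CA directory: at verify time OpenSSL looks certificates up in /etc/ssl/certs by subject hash, so each PEM needs a <hash>.0 symlink (b5213941 being minikubeCA's hash, per the command above). The equivalent manual steps for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
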
	I0213 15:57:15.393503   22261 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:57:15.400240   22261 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 15:57:15.400296   22261 kubeadm.go:404] StartCluster: {Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:57:15.400403   22261 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:57:15.422683   22261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:57:15.440836   22261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:57:15.458664   22261 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:57:15.458722   22261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:57:15.477686   22261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:57:15.477719   22261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:57:15.584760   22261 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 15:57:15.584800   22261 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:57:15.868959   22261 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:57:15.869055   22261 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:57:15.869137   22261 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:57:16.056948   22261 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:57:16.057930   22261 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:57:16.065146   22261 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 15:57:16.152190   22261 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:57:16.239982   22261 out.go:204]   - Generating certificates and keys ...
	I0213 15:57:16.240077   22261 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:57:16.240164   22261 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:57:16.372426   22261 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 15:57:16.487364   22261 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 15:57:16.652842   22261 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 15:57:16.824376   22261 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 15:57:16.989096   22261 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 15:57:16.989379   22261 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-745000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:57:17.174766   22261 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 15:57:17.174931   22261 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-745000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0213 15:57:17.387314   22261 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 15:57:17.582444   22261 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 15:57:17.627734   22261 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 15:57:17.627976   22261 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:57:17.698045   22261 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:57:18.029144   22261 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:57:18.206083   22261 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:57:18.564199   22261 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:57:18.564880   22261 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:57:18.588831   22261 out.go:204]   - Booting up control plane ...
	I0213 15:57:18.588913   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:57:18.589036   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:57:18.589110   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:57:18.589182   22261 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:57:18.589303   22261 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:57:58.575294   22261 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:57:58.576158   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:57:58.576360   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:58:03.577070   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:58:03.577234   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:58:13.578972   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:58:13.579139   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:58:33.579861   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:58:33.580077   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:59:13.580658   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:59:13.580845   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 15:59:13.580856   22261 kubeadm.go:322] 
	I0213 15:59:13.580887   22261 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 15:59:13.580926   22261 kubeadm.go:322] 	timed out waiting for the condition
	I0213 15:59:13.580939   22261 kubeadm.go:322] 
	I0213 15:59:13.580974   22261 kubeadm.go:322] This error is likely caused by:
	I0213 15:59:13.581007   22261 kubeadm.go:322] 	- The kubelet is not running
	I0213 15:59:13.581089   22261 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 15:59:13.581096   22261 kubeadm.go:322] 
	I0213 15:59:13.581176   22261 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 15:59:13.581213   22261 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 15:59:13.581244   22261 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 15:59:13.581251   22261 kubeadm.go:322] 
	I0213 15:59:13.581366   22261 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 15:59:13.581498   22261 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 15:59:13.581629   22261 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 15:59:13.581671   22261 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 15:59:13.581732   22261 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 15:59:13.581762   22261 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 15:59:13.586215   22261 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 15:59:13.586287   22261 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 15:59:13.586456   22261 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 15:59:13.586599   22261 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:59:13.586718   22261 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 15:59:13.586818   22261 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
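kubeadm's own triage advice above condenses to the following commands, run inside the guest (for example via 'minikube ssh -p old-k8s-version-745000'; CONTAINERID is a placeholder):

    curl -sSL http://localhost:10248/healthz     # the probe kubeadm keeps retrying
    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause     # locate a crashed control-plane container
    docker logs CONTAINERID                      # then inspect its logs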
	W0213 15:59:13.586901   22261 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-745000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-745000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0213 15:59:13.586947   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 15:59:14.068026   22261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:59:14.085129   22261 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 15:59:14.085185   22261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:59:14.100223   22261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:59:14.100249   22261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 15:59:14.154700   22261 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 15:59:14.154767   22261 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:59:14.429273   22261 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:59:14.429370   22261 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:59:14.429451   22261 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:59:14.640801   22261 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:59:14.641728   22261 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:59:14.648561   22261 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 15:59:14.710170   22261 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:59:14.752480   22261 out.go:204]   - Generating certificates and keys ...
	I0213 15:59:14.752569   22261 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:59:14.752640   22261 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:59:14.752708   22261 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:59:14.752769   22261 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:59:14.752835   22261 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:59:14.752902   22261 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:59:14.752962   22261 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:59:14.753019   22261 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:59:14.753093   22261 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:59:14.753152   22261 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:59:14.753179   22261 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:59:14.753221   22261 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:59:14.770772   22261 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:59:14.858088   22261 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:59:14.929882   22261 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:59:15.071136   22261 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:59:15.072015   22261 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:59:15.093540   22261 out.go:204]   - Booting up control plane ...
	I0213 15:59:15.093651   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:59:15.093726   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:59:15.093788   22261 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:59:15.093849   22261 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:59:15.093982   22261 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:59:55.083845   22261 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 15:59:55.084752   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 15:59:55.084936   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:00:00.086724   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:00:00.086922   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:00:10.087458   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:00:10.087640   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:00:30.088592   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:00:30.088788   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:01:10.089568   22261 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:01:10.089783   22261 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:01:10.089802   22261 kubeadm.go:322] 
	I0213 16:01:10.089875   22261 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 16:01:10.089922   22261 kubeadm.go:322] 	timed out waiting for the condition
	I0213 16:01:10.089929   22261 kubeadm.go:322] 
	I0213 16:01:10.089957   22261 kubeadm.go:322] This error is likely caused by:
	I0213 16:01:10.089990   22261 kubeadm.go:322] 	- The kubelet is not running
	I0213 16:01:10.090071   22261 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 16:01:10.090078   22261 kubeadm.go:322] 
	I0213 16:01:10.090164   22261 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 16:01:10.090189   22261 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 16:01:10.090220   22261 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 16:01:10.090227   22261 kubeadm.go:322] 
	I0213 16:01:10.090313   22261 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 16:01:10.090389   22261 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 16:01:10.090460   22261 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 16:01:10.090495   22261 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 16:01:10.090560   22261 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 16:01:10.090601   22261 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 16:01:10.095257   22261 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 16:01:10.095331   22261 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 16:01:10.095437   22261 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 16:01:10.095526   22261 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:01:10.095592   22261 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 16:01:10.095648   22261 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 16:01:10.095677   22261 kubeadm.go:406] StartCluster complete in 3m54.700458975s
	I0213 16:01:10.095757   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:01:10.114625   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.114639   22261 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:01:10.114702   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:01:10.135046   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.135060   22261 logs.go:278] No container was found matching "etcd"
	I0213 16:01:10.135126   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:01:10.152261   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.152275   22261 logs.go:278] No container was found matching "coredns"
	I0213 16:01:10.152346   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:01:10.170655   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.170669   22261 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:01:10.170736   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:01:10.189888   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.189902   22261 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:01:10.189965   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:01:10.208544   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.208558   22261 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:01:10.208629   22261 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:01:10.227784   22261 logs.go:276] 0 containers: []
	W0213 16:01:10.227801   22261 logs.go:278] No container was found matching "kindnet"
	I0213 16:01:10.227810   22261 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:01:10.227820   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:01:10.298369   22261 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
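The describe-nodes failure is expected at this point: the kubelet never came up, so no apiserver is listening on port 8443. A quick check from the guest would show the same refusal (a sketch, not taken from this run):

    curl -k https://localhost:8443/healthz   # connection refused, matching the kubectl error above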
	I0213 16:01:10.298381   22261 logs.go:123] Gathering logs for Docker ...
	I0213 16:01:10.298388   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:01:10.320291   22261 logs.go:123] Gathering logs for container status ...
	I0213 16:01:10.320306   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:01:10.381755   22261 logs.go:123] Gathering logs for kubelet ...
	I0213 16:01:10.381769   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:01:10.426982   22261 logs.go:123] Gathering logs for dmesg ...
	I0213 16:01:10.426999   22261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
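For reference, the log sweep above is equivalent to running these commands (taken verbatim from this run) inside the guest:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400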
	W0213 16:01:10.448419   22261 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 16:01:10.448439   22261 out.go:239] * 
	W0213 16:01:10.448489   22261 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
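Of the recurring preflight warnings, IsDockerSystemdCheck is the most plausible culprit: Docker is using the cgroupfs driver while the kubelet is typically configured for systemd. A hedged sketch of the usual remedy on the guest (not verified against this run) is to align Docker's cgroup driver and restart it:

    # /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    sudo systemctl restart docker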
	
	W0213 16:01:10.448512   22261 out.go:239] * 
	W0213 16:01:10.449252   22261 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
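To capture the diagnostics the box asks for against this profile (the -p flag value is taken from this run's profile name):

    minikube -p old-k8s-version-745000 logs --file=logs.txt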
	I0213 16:01:10.512909   22261 out.go:177] 
	W0213 16:01:10.533668   22261 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 16:01:10.533701   22261 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 16:01:10.533725   22261 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 16:01:10.554758   22261 out.go:177] 

** /stderr **
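
The K8S_KUBELET_NOT_RUNNING exit above, combined with the cgroupfs/systemd warning in the kubeadm preflight output, points at a kubelet/Docker cgroup-driver mismatch as the most likely reason the kubelet never answered on 127.0.0.1:10248. A minimal triage sketch, reusing the profile and version from this run (the daemon.json path is the standard Docker location, not something this log confirms):

    # Act on minikube's own suggestion: pin the kubelet cgroup driver at start.
    minikube start -p old-k8s-version-745000 --driver=docker \
      --kubernetes-version=v1.16.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # Or align Docker with the recommended systemd driver inside the node, e.g.
    # /etc/docker/daemon.json: { "exec-opts": ["native.cgroupdriver=systemd"] },
    # then restart docker. The remaining preflight warnings map to:
    sudo swapoff -a                         # "running with swap on is not supported"
    sudo systemctl enable kubelet.service   # "kubelet service is not enabled"

The systemctl/journalctl commands quoted in the kubeadm output remain the fastest way to see what the kubelet itself logged before kubeadm timed out.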
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:56:56.188240128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "970e912762f28c45ce8dc481d3366240a812029a655c36560fed0e5d51dcb46d",
	            "SandboxKey": "/var/run/docker/netns/970e912762f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "3d8a93baad79ed1f421248ed1cca2ffc158043d55f4dc5df0ce630c93615549b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
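
Note that the inspect dump shows a healthy kicbase container (State.Running is true, all five ports are bound to 127.0.0.1), so the failure sits inside the node rather than at the Docker layer. When reading reports like this, a Go-template query pulls the same fields without the full JSON; a small sketch using the container name from this run:

    # Container state and restart count only
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-745000

    # Host port mapped to the apiserver port (8443) that minikube status probes
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
        old-k8s-version-745000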
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 6 (398.299732ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 16:01:11.098509   23117 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-745000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (259.68s)
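
The status probe above and the DeployApp failure that follows both trace back to the same root cause: minikube start exited before writing the profile into /Users/jenkins/minikube-integration/18169-6320/kubeconfig, so kubectl is still pointing at a stale minikube-vm entry. Outside of CI, the fix the warning itself suggests would look roughly like this (profile name taken from this run):

    minikube update-context -p old-k8s-version-745000   # rewrite the kubeconfig entry
    kubectl config get-contexts                         # confirm the context exists

Here the entry is missing outright rather than stale, so update-context can only help once the cluster actually starts.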

TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-745000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-745000 create -f testdata/busybox.yaml: exit status 1 (39.038271ms)

** stderr ** 
	error: context "old-k8s-version-745000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-745000 create -f testdata/busybox.yaml failed: exit status 1
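
This failure is purely consequential: with no "old-k8s-version-745000" context in the kubeconfig (see the FirstStart post-mortem above), kubectl rejects the --context flag before testdata/busybox.yaml is even read. A quick check, assuming the kubeconfig path from the run above:

    kubectl config get-contexts
    kubectl --kubeconfig /Users/jenkins/minikube-integration/18169-6320/kubeconfig \
        config view -o jsonpath='{.contexts[*].name}'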
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:56:56.188240128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "970e912762f28c45ce8dc481d3366240a812029a655c36560fed0e5d51dcb46d",
	            "SandboxKey": "/var/run/docker/netns/970e912762f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "3d8a93baad79ed1f421248ed1cca2ffc158043d55f4dc5df0ce630c93615549b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 6 (412.015518ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 16:01:11.606538   23130 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-745000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:56:56.188240128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "970e912762f28c45ce8dc481d3366240a812029a655c36560fed0e5d51dcb46d",
	            "SandboxKey": "/var/run/docker/netns/970e912762f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "3d8a93baad79ed1f421248ed1cca2ffc158043d55f4dc5df0ce630c93615549b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 6 (406.746878ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 16:01:12.066645   23142 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-745000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-745000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0213 16:01:14.099009    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.104463    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.115083    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.135285    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.175590    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.257811    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.418169    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:14.738821    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:15.378997    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:16.404632    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 16:01:16.659483    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:19.219785    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:24.341848    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:28.239609    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 16:01:34.279263    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.285682    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.296114    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.316310    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.356552    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.437431    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.582329    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:34.598670    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:34.918799    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:35.560178    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:36.840339    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:39.401484    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:44.522532    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:45.810447    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 16:01:54.762614    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:01:55.062212    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:01:55.924563    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 16:02:13.553852    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 16:02:15.211660    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 16:02:15.242432    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:02:36.021641    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
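The cert_rotation errors above keep firing because client-go's certificate watcher is still re-opening client.crt files for profiles (bridge-208000, kubenet-208000, calico-208000, ...) that earlier tests already tore down. A minimal cleanup sketch, assuming those profiles are genuinely finished and that each kubeconfig context shares its profile's name (an assumption, not shown in this log):

	# hypothetical cleanup: drop the leftover profile and its kubeconfig
	# context so the watcher stops re-opening the deleted client certificate
	minikube delete -p bridge-208000
	kubectl config delete-context bridge-208000 2>/dev/null || true
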
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-745000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.591781431s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-745000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-745000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-745000 describe deploy/metrics-server -n kube-system: exit status 1 (38.854077ms)

** stderr ** 
	error: context "old-k8s-version-745000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-745000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
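Had the context survived, the image assertion could be checked directly against the live deployment; a sketch assuming a reachable cluster and the standard metrics-server deployment name:

	# print the image actually wired into the metrics-server deployment;
	# the test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-745000 -n kube-system \
		get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
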
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:56:56.188240128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "970e912762f28c45ce8dc481d3366240a812029a655c36560fed0e5d51dcb46d",
	            "SandboxKey": "/var/run/docker/netns/970e912762f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "3d8a93baad79ed1f421248ed1cca2ffc158043d55f4dc5df0ce630c93615549b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 6 (411.728175ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0213 16:02:53.163019   23172 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-745000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.10s)
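Both assertions above failed for the same underlying reason: the apiserver on 127.0.0.1:8443 refused connections and the old-k8s-version-745000 entry was absent from the kubeconfig, so every kubectl call died before reaching the cluster. Since docker inspect shows the container itself running, a plausible recovery sketch following the warning in the status output (assuming the profile is otherwise healthy):

	# rewrite the kubeconfig entry for this profile, then confirm kubectl
	# points at it and the apiserver actually answers
	minikube update-context -p old-k8s-version-745000
	kubectl config current-context
	kubectl cluster-info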

TestStartStop/group/old-k8s-version/serial/SecondStart (510.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0213 16:02:56.201987    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:03:05.119959    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:03:32.553157    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 16:03:32.809702    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:03:57.940690    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:04:00.241915    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m27.029953097s)

-- stdout --
	* [old-k8s-version-745000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-745000 in cluster old-k8s-version-745000
	* Pulling base image v0.0.42-1704759386-17866 ...
	* Restarting existing docker container for "old-k8s-version-745000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0213 16:02:55.220249   23204 out.go:291] Setting OutFile to fd 1 ...
	I0213 16:02:55.220510   23204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:02:55.220516   23204 out.go:304] Setting ErrFile to fd 2...
	I0213 16:02:55.220520   23204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:02:55.220702   23204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 16:02:55.222230   23204 out.go:298] Setting JSON to false
	I0213 16:02:55.245118   23204 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5835,"bootTime":1707863140,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 16:02:55.245215   23204 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 16:02:55.266839   23204 out.go:177] * [old-k8s-version-745000] minikube v1.32.0 on Darwin 14.3.1
	I0213 16:02:55.310476   23204 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 16:02:55.310552   23204 notify.go:220] Checking for updates...
	I0213 16:02:55.354395   23204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:02:55.375460   23204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 16:02:55.397427   23204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 16:02:55.420466   23204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 16:02:55.441108   23204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 16:02:55.462707   23204 config.go:182] Loaded profile config "old-k8s-version-745000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 16:02:55.486311   23204 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 16:02:55.507225   23204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 16:02:55.563042   23204 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 16:02:55.563209   23204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:02:55.673104   23204 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:02:55.662565997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:02:55.694942   23204 out.go:177] * Using the docker driver based on existing profile
	I0213 16:02:55.716822   23204 start.go:298] selected driver: docker
	I0213 16:02:55.716846   23204 start.go:902] validating driver "docker" against &{Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:02:55.716973   23204 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 16:02:55.720872   23204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:02:55.827941   23204 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:02:55.818442549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:02:55.828136   23204 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 16:02:55.828194   23204 cni.go:84] Creating CNI manager for ""
	I0213 16:02:55.828206   23204 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 16:02:55.828217   23204 start_flags.go:321] config:
	{Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:02:55.887516   23204 out.go:177] * Starting control plane node old-k8s-version-745000 in cluster old-k8s-version-745000
	I0213 16:02:55.909338   23204 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 16:02:55.930359   23204 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 16:02:55.972256   23204 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 16:02:55.972286   23204 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 16:02:55.972312   23204 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 16:02:55.972326   23204 cache.go:56] Caching tarball of preloaded images
	I0213 16:02:55.972464   23204 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 16:02:55.972491   23204 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 16:02:55.972620   23204 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/config.json ...
	I0213 16:02:56.024266   23204 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 16:02:56.024291   23204 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 16:02:56.024310   23204 cache.go:194] Successfully downloaded all kic artifacts
	I0213 16:02:56.024362   23204 start.go:365] acquiring machines lock for old-k8s-version-745000: {Name:mkd7f9273d4ef06a0c4934b33030a9cfbc88fa9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 16:02:56.024456   23204 start.go:369] acquired machines lock for "old-k8s-version-745000" in 71.599µs
	I0213 16:02:56.024478   23204 start.go:96] Skipping create...Using existing machine configuration
	I0213 16:02:56.024486   23204 fix.go:54] fixHost starting: 
	I0213 16:02:56.024735   23204 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Status}}
	I0213 16:02:56.076619   23204 fix.go:102] recreateIfNeeded on old-k8s-version-745000: state=Stopped err=<nil>
	W0213 16:02:56.076650   23204 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 16:02:56.098552   23204 out.go:177] * Restarting existing docker container for "old-k8s-version-745000" ...
	I0213 16:02:56.142098   23204 cli_runner.go:164] Run: docker start old-k8s-version-745000
	I0213 16:02:56.385942   23204 cli_runner.go:164] Run: docker container inspect old-k8s-version-745000 --format={{.State.Status}}
	I0213 16:02:56.445708   23204 kic.go:430] container "old-k8s-version-745000" state is running.
	I0213 16:02:56.446356   23204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 16:02:56.505629   23204 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/config.json ...
	I0213 16:02:56.506097   23204 machine.go:88] provisioning docker machine ...
	I0213 16:02:56.506125   23204 ubuntu.go:169] provisioning hostname "old-k8s-version-745000"
	I0213 16:02:56.506208   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:02:56.576128   23204 main.go:141] libmachine: Using SSH client type: native
	I0213 16:02:56.576546   23204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56672 <nil> <nil>}
	I0213 16:02:56.576563   23204 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-745000 && echo "old-k8s-version-745000" | sudo tee /etc/hostname
	I0213 16:02:56.577734   23204 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 16:02:59.739818   23204 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745000
	
	I0213 16:02:59.739914   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:02:59.791552   23204 main.go:141] libmachine: Using SSH client type: native
	I0213 16:02:59.791855   23204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56672 <nil> <nil>}
	I0213 16:02:59.791868   23204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-745000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-745000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-745000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 16:02:59.929096   23204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:02:59.929115   23204 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 16:02:59.929136   23204 ubuntu.go:177] setting up certificates
	I0213 16:02:59.929147   23204 provision.go:83] configureAuth start
	I0213 16:02:59.929222   23204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 16:02:59.981327   23204 provision.go:138] copyHostCerts
	I0213 16:02:59.981437   23204 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 16:02:59.981448   23204 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 16:02:59.981584   23204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 16:02:59.981815   23204 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 16:02:59.981821   23204 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 16:02:59.981924   23204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 16:02:59.982098   23204 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 16:02:59.982104   23204 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 16:02:59.982182   23204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 16:02:59.982335   23204 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-745000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-745000]
	I0213 16:03:00.126715   23204 provision.go:172] copyRemoteCerts
	I0213 16:03:00.126785   23204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 16:03:00.126839   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:00.178696   23204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56672 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 16:03:00.283288   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 16:03:00.323073   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 16:03:00.363257   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 16:03:00.403531   23204 provision.go:86] duration metric: configureAuth took 474.378704ms
	I0213 16:03:00.403547   23204 ubuntu.go:193] setting minikube options for container-runtime
	I0213 16:03:00.403716   23204 config.go:182] Loaded profile config "old-k8s-version-745000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 16:03:00.403788   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:00.457115   23204 main.go:141] libmachine: Using SSH client type: native
	I0213 16:03:00.457397   23204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56672 <nil> <nil>}
	I0213 16:03:00.457407   23204 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 16:03:00.598829   23204 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 16:03:00.598845   23204 ubuntu.go:71] root file system type: overlay
	I0213 16:03:00.598930   23204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 16:03:00.599010   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:00.651509   23204 main.go:141] libmachine: Using SSH client type: native
	I0213 16:03:00.651804   23204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56672 <nil> <nil>}
	I0213 16:03:00.651856   23204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 16:03:00.813064   23204 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 16:03:00.813156   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:00.865625   23204 main.go:141] libmachine: Using SSH client type: native
	I0213 16:03:00.865923   23204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56672 <nil> <nil>}
	I0213 16:03:00.865936   23204 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 16:03:01.012447   23204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:03:01.012469   23204 machine.go:91] provisioned docker machine in 4.506460894s
	I0213 16:03:01.012490   23204 start.go:300] post-start starting for "old-k8s-version-745000" (driver="docker")
	I0213 16:03:01.012503   23204 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 16:03:01.012563   23204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 16:03:01.012616   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:01.066799   23204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56672 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 16:03:01.172326   23204 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 16:03:01.176650   23204 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 16:03:01.176676   23204 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 16:03:01.176684   23204 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 16:03:01.176689   23204 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 16:03:01.176701   23204 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 16:03:01.176798   23204 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 16:03:01.176980   23204 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 16:03:01.177204   23204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 16:03:01.192086   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:03:01.232066   23204 start.go:303] post-start completed in 219.566864ms
	I0213 16:03:01.232196   23204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 16:03:01.232278   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:01.284811   23204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56672 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 16:03:01.379458   23204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 16:03:01.384616   23204 fix.go:56] fixHost completed within 5.360243363s
	I0213 16:03:01.384635   23204 start.go:83] releasing machines lock for "old-k8s-version-745000", held for 5.360287425s
	I0213 16:03:01.384731   23204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745000
	I0213 16:03:01.437746   23204 ssh_runner.go:195] Run: cat /version.json
	I0213 16:03:01.437765   23204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 16:03:01.437830   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:01.437838   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:01.494653   23204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56672 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 16:03:01.494672   23204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56672 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/old-k8s-version-745000/id_rsa Username:docker}
	I0213 16:03:01.697388   23204 ssh_runner.go:195] Run: systemctl --version
	I0213 16:03:01.702426   23204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 16:03:01.707373   23204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 16:03:01.707447   23204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 16:03:01.723022   23204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 16:03:01.738098   23204 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0213 16:03:01.738117   23204 start.go:475] detecting cgroup driver to use...
	I0213 16:03:01.738128   23204 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:03:01.738241   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:03:01.766012   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0213 16:03:01.782212   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 16:03:01.798724   23204 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 16:03:01.798789   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 16:03:01.815211   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:03:01.831868   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 16:03:01.848081   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:03:01.864884   23204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 16:03:01.880807   23204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 16:03:01.897319   23204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 16:03:01.912835   23204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 16:03:01.928194   23204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:03:01.994490   23204 ssh_runner.go:195] Run: sudo systemctl restart containerd
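	Each sed call above is a line-anchored regex rewrite of /etc/containerd/config.toml. The SystemdCgroup edit, for instance, reduces to the following sketch (same file path as the log; not minikube's actual code):
	
	package main
	
	import (
		"log"
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// (?m) anchors ^/$ per line, like sed; $1 preserves the original indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			log.Fatal(err)
		}
	}
	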
	I0213 16:03:02.080217   23204 start.go:475] detecting cgroup driver to use...
	I0213 16:03:02.080236   23204 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:03:02.080295   23204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 16:03:02.099108   23204 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 16:03:02.099182   23204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 16:03:02.120059   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:03:02.152063   23204 ssh_runner.go:195] Run: which cri-dockerd
	I0213 16:03:02.157305   23204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 16:03:02.173345   23204 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 16:03:02.205140   23204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 16:03:02.294314   23204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 16:03:02.381434   23204 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 16:03:02.381594   23204 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 16:03:02.410804   23204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:03:02.473626   23204 ssh_runner.go:195] Run: sudo systemctl restart docker
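	The 130-byte daemon.json itself is never echoed to the log. For the cgroupfs driver selected here it plausibly resembles the snippet below, which is a reconstruction under that assumption, not the file minikube wrote:
	
	{
		"exec-opts": ["native.cgroupdriver=cgroupfs"],
		"log-driver": "json-file",
		"log-opts": {"max-size": "100m"},
		"storage-driver": "overlay2"
	}
	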
	I0213 16:03:02.725123   23204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:03:02.751359   23204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:03:02.820504   23204 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0213 16:03:02.820578   23204 cli_runner.go:164] Run: docker exec -t old-k8s-version-745000 dig +short host.docker.internal
	I0213 16:03:02.937424   23204 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 16:03:02.937523   23204 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 16:03:02.942427   23204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
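	The dig +short query above is just a DNS lookup for host.docker.internal from inside the container. The same probe in Go (it must run inside the container, where that name resolves; 192.168.65.254 was the answer in this run):
	
	package main
	
	import (
		"fmt"
		"log"
		"net"
	)
	
	func main() {
		addrs, err := net.LookupHost("host.docker.internal")
		if err != nil {
			log.Fatal(err)
		}
		// The first answer is used as the host IP for the host.minikube.internal /etc/hosts entry.
		fmt.Println(addrs[0])
	}
	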
	I0213 16:03:02.962936   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:03.019561   23204 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 16:03:03.019649   23204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:03:03.040643   23204 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 16:03:03.040663   23204 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 16:03:03.040728   23204 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 16:03:03.056435   23204 ssh_runner.go:195] Run: which lz4
	I0213 16:03:03.061464   23204 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 16:03:03.066348   23204 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 16:03:03.066384   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0213 16:03:09.271068   23204 docker.go:649] Took 6.209824 seconds to copy over tarball
	I0213 16:03:09.271166   23204 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 16:03:10.893078   23204 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.621902545s)
	I0213 16:03:10.893093   23204 ssh_runner.go:146] rm: /preloaded.tar.lz4
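	tar -I lz4 streams the archive through the lz4 decompressor before untarring. A Go sketch of the same streaming extraction, using the github.com/pierrec/lz4/v4 package and handling only directories and regular files (the real command also preserves xattrs, omitted here):
	
	package main
	
	import (
		"archive/tar"
		"io"
		"log"
		"os"
		"path/filepath"
	
		"github.com/pierrec/lz4/v4"
	)
	
	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		// Decompress on the fly and feed the tar reader, like `tar -I lz4 -C /var -xf`.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			dst := filepath.Join("/var", hdr.Name)
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(dst, hdr.FileInfo().Mode()); err != nil {
					log.Fatal(err)
				}
			case tar.TypeReg:
				out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, hdr.FileInfo().Mode())
				if err != nil {
					log.Fatal(err)
				}
				if _, err := io.Copy(out, tr); err != nil {
					log.Fatal(err)
				}
				out.Close()
			}
		}
	}
	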
	I0213 16:03:10.945113   23204 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 16:03:10.960926   23204 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0213 16:03:10.990968   23204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:03:11.052496   23204 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 16:03:11.738963   23204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:03:11.759632   23204 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0213 16:03:11.759643   23204 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0213 16:03:11.759657   23204 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 16:03:11.765288   23204 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:03:11.765288   23204 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 16:03:11.765833   23204 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 16:03:11.765974   23204 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 16:03:11.766056   23204 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 16:03:11.766105   23204 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 16:03:11.766122   23204 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 16:03:11.766526   23204 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 16:03:11.770836   23204 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 16:03:11.771105   23204 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 16:03:11.773789   23204 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 16:03:11.774864   23204 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 16:03:11.774902   23204 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 16:03:11.774928   23204 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 16:03:11.774951   23204 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:03:11.775003   23204 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 16:03:13.699148   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 16:03:13.720796   23204 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 16:03:13.720837   23204 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 16:03:13.720902   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 16:03:13.741763   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
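	A "needs transfer" verdict is a plain string comparison between the image ID the daemon reports and the digest minikube expects for the tag. Schematically (a hypothetical helper that shells out to docker, as the log does):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// needsTransfer reports whether ref is absent from the container runtime
	// or present under a different image ID than expected.
	func needsTransfer(ref, wantID string) bool {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if err != nil {
			return true // not present at all
		}
		got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
		return got != wantID
	}
	
	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/kube-scheduler:v1.16.0",
			"301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"))
	}
	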
	I0213 16:03:13.785695   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 16:03:13.805711   23204 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 16:03:13.805736   23204 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 16:03:13.805801   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0213 16:03:13.824766   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 16:03:13.825239   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 16:03:13.844387   23204 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 16:03:13.844419   23204 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 16:03:13.844483   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 16:03:13.845429   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 16:03:13.845952   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 16:03:13.861499   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 16:03:13.866546   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 16:03:13.870014   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 16:03:13.870725   23204 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 16:03:13.870748   23204 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 16:03:13.870789   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 16:03:13.871613   23204 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 16:03:13.871633   23204 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 16:03:13.871679   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 16:03:13.885007   23204 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 16:03:13.885042   23204 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 16:03:13.885126   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0213 16:03:13.897882   23204 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 16:03:13.897907   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 16:03:13.897917   23204 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0213 16:03:13.897949   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 16:03:13.898010   23204 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0213 16:03:13.910624   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 16:03:13.920144   23204 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 16:03:14.532861   23204 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:03:14.554978   23204 cache_images.go:92] LoadImages completed in 2.795368672s
	W0213 16:03:14.555025   23204 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0213 16:03:14.555101   23204 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 16:03:14.610755   23204 cni.go:84] Creating CNI manager for ""
	I0213 16:03:14.610772   23204 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 16:03:14.610801   23204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 16:03:14.610819   23204 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-745000 NodeName:old-k8s-version-745000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 16:03:14.610918   23204 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-745000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-745000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 16:03:14.610978   23204 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-745000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 16:03:14.611038   23204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 16:03:14.626589   23204 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 16:03:14.626654   23204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 16:03:14.646001   23204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0213 16:03:14.676156   23204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 16:03:14.709587   23204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0213 16:03:14.740230   23204 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0213 16:03:14.745410   23204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 16:03:14.762440   23204 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000 for IP: 192.168.67.2
	I0213 16:03:14.762459   23204 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:03:14.762651   23204 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 16:03:14.762728   23204 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 16:03:14.762831   23204 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/client.key
	I0213 16:03:14.762914   23204 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key.c7fa3a9e
	I0213 16:03:14.762980   23204 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key
	I0213 16:03:14.763189   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 16:03:14.763235   23204 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 16:03:14.763245   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 16:03:14.763290   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 16:03:14.763328   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 16:03:14.763369   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 16:03:14.763453   23204 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:03:14.763959   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 16:03:14.805094   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 16:03:14.846783   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 16:03:14.888617   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/old-k8s-version-745000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 16:03:14.931581   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 16:03:14.972953   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 16:03:15.014690   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 16:03:15.058199   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 16:03:15.098754   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 16:03:15.141247   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 16:03:15.182684   23204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 16:03:15.225349   23204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 16:03:15.282128   23204 ssh_runner.go:195] Run: openssl version
	I0213 16:03:15.288506   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 16:03:15.310122   23204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 16:03:15.315190   23204 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 16:03:15.315240   23204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 16:03:15.322335   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 16:03:15.337548   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 16:03:15.353630   23204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:03:15.358208   23204 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:03:15.358262   23204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:03:15.364896   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 16:03:15.379692   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 16:03:15.396618   23204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 16:03:15.400995   23204 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 16:03:15.401049   23204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 16:03:15.407467   23204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 16:03:15.422823   23204 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 16:03:15.427090   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 16:03:15.434994   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 16:03:15.442095   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 16:03:15.449024   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 16:03:15.455668   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 16:03:15.462361   23204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
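	Each openssl x509 -checkend 86400 call above asks one question: does this certificate expire within the next 24 hours (86400 seconds)? The same check with only Go's standard library:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -checkend <seconds>` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
	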
	I0213 16:03:15.469047   23204 kubeadm.go:404] StartCluster: {Name:old-k8s-version-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-745000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:03:15.469168   23204 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:03:15.487852   23204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 16:03:15.504598   23204 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 16:03:15.504618   23204 kubeadm.go:636] restartCluster start
	I0213 16:03:15.504672   23204 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 16:03:15.520224   23204 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:03:15.520315   23204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-745000
	I0213 16:03:15.573758   23204 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-745000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:03:15.573929   23204 kubeconfig.go:146] "old-k8s-version-745000" context is missing from /Users/jenkins/minikube-integration/18169-6320/kubeconfig - will repair!
	I0213 16:03:15.574236   23204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:03:15.575850   23204 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 16:03:15.591964   23204 api_server.go:166] Checking apiserver status ...
	I0213 16:03:15.592032   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:03:15.608434   23204 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	(the identical api_server.go:166 "Checking apiserver status ..." check, with sudo pgrep -xnf kube-apiserver.*minikube.* exiting with status 1 and empty stdout/stderr each time, repeated on a ~500ms cadence from 16:03:16.092 through 16:03:25.612)
	I0213 16:03:25.612798   23204 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 16:03:25.612811   23204 kubeadm.go:1135] stopping kube-system containers ...
	I0213 16:03:25.612875   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:03:25.633448   23204 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 16:03:25.651458   23204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:03:25.666602   23204 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 13 23:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 13 23:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 13 23:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 13 23:59 /etc/kubernetes/scheduler.conf
	
	I0213 16:03:25.666665   23204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 16:03:25.681580   23204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 16:03:25.696387   23204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 16:03:25.711112   23204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 16:03:25.726392   23204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:03:25.741581   23204 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 16:03:25.741595   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:03:25.821544   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:03:26.229275   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:03:26.435064   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:03:26.599578   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
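	The long stretch that follows is minikube polling for a kube-apiserver process on a roughly 500ms cadence until a deadline. Reduced to a sketch (hypothetical helper, not minikube's api_server.go; the timeout value is illustrative):
	
	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForProcess probes pattern via pgrep every interval until ctx expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // e.g. "context deadline exceeded", as seen earlier in the log
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond))
	}
	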
	I0213 16:03:26.707055   23204 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:03:26.707121   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:27.207260   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:27.707624   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:28.208180   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:28.707201   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:29.207674   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:29.707197   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:30.208074   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:30.707411   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:31.207336   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:31.709245   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:32.207211   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:32.707231   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:33.209107   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:33.707387   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:34.207124   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:34.707998   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:35.207066   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:35.707068   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:36.207311   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:36.706965   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:37.207962   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:37.708148   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:38.207066   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:38.707016   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:39.208069   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:39.707127   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:40.207634   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:40.707571   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:41.206873   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:41.707546   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:42.207401   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:42.707935   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:43.207582   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:43.706943   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:44.207925   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:44.707143   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:45.206944   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:45.706957   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:46.206794   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:46.707225   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:47.207809   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:47.706951   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:48.208878   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:48.706903   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:03:49.207765   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 73 additional identical probes, "sudo pgrep -xnf kube-apiserver.*minikube.*" retried every ~0.5s from 16:03:49.7 through 16:04:25.7, elided ...]
	I0213 16:04:26.206087   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
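The run of probes above is minikube polling the node for a live kube-apiserver process: `pgrep -f` matches against the full command line, `-x` requires the pattern to match that whole line, and `-n` returns only the newest matching PID; the probe is retried every ~0.5s until a deadline. A minimal Go sketch of that poll shape, run locally rather than over the test's SSH runner (the function names here are illustrative, not minikube's actual ssh_runner API):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollAPIServerPID retries the pgrep probe seen in the log every 500ms
// until it matches or the deadline passes. Illustrative sketch only.
func pollAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe as the log: exact regex match of the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when a PID matched
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
	}
	return "", fmt.Errorf("kube-apiserver process not found within %v", timeout)
}

func main() {
	pid, err := pollAPIServerPID(30 * time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}
```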
	I0213 16:04:26.706134   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:26.731953   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.731978   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:26.732050   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:26.751400   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.751419   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:26.751513   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:26.773468   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.773484   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:26.773603   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:26.793543   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.793563   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:26.793664   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:26.817338   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.817357   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:26.817440   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:26.838875   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.838892   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:26.838962   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:26.870903   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.870923   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:26.870997   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:26.891739   23204 logs.go:276] 0 containers: []
	W0213 16:04:26.891761   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
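With no apiserver process found, the harness falls back to asking Docker for each expected control-plane container by name filter; every query above returns zero IDs, meaning the containers were never created at all. A hedged Go sketch of that per-component check (the command and component names mirror the log; the helper itself is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the log's query:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// One ID per output line; an empty slice means the component's
// container was never created.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}
```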
	I0213 16:04:26.891774   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:26.891784   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:26.913536   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:26.913560   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:26.985585   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
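The describe-nodes probe fails for the same root cause: the version-matched kubectl under /var/lib/minikube/binaries dials the apiserver at localhost:8443 and nothing is listening, so every attempt exits with status 1 and a refused connection. A sketch of running that probe and recognizing the refused-connection case from stderr (the classification heuristic is an assumption for illustration, not minikube's logs.go logic):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// describeNodes runs the same command as the log and reports whether
// the failure looks like a refused connection to the apiserver.
func describeNodes(kubectl, kubeconfig string) error {
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig="+kubeconfig)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		msg := strings.TrimSpace(stderr.String())
		// Heuristic: the stderr text seen throughout this log.
		if strings.Contains(msg, "connection to the server") &&
			strings.Contains(msg, "refused") {
			return fmt.Errorf("apiserver not reachable: %s", msg)
		}
		return fmt.Errorf("describe nodes failed: %v: %s", err, msg)
	}
	return nil
}

func main() {
	fmt.Println(describeNodes(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig"))
}
```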
	I0213 16:04:26.985603   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:26.985613   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:27.027601   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:27.027631   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:27.105594   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:27.105618   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
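With no containers to inspect, the remaining evidence comes from the host itself: journalctl for the kubelet and docker/cri-docker units, a filtered dmesg tail, and a container-status listing that prefers crictl but falls back to plain `docker ps -a` (the `which crictl || echo crictl` command substitution keeps that pipeline from failing outright when crictl is absent). A compact sketch of that gather fan-out; only the command strings come from the log, the source table is an assumed structure:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherSources lists the host-side commands the log cycles through.
// Each runs via bash -c so shell features (pipes, backticks, ||) work.
var gatherSources = map[string]string{
	"kubelet": `sudo journalctl -u kubelet -n 400`,
	"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"Docker":  `sudo journalctl -u docker -u cri-docker -n 400`,
	// Prefer crictl when installed; otherwise fall back to docker ps.
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// gatherLogs runs every source and keeps going on errors, since one
// missing source should not stop the rest of the diagnostics.
func gatherLogs() map[string]string {
	out := make(map[string]string)
	for name, cmdline := range gatherSources {
		b, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, body := range gatherLogs() {
		fmt.Printf("==> %s <== (%d bytes)\n", name, len(body))
	}
}
```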
	I0213 16:04:29.671307   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:29.690690   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:29.713092   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.713107   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:29.713181   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:29.732835   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.732848   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:29.732923   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:29.752758   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.752772   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:29.752841   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:29.770554   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.770570   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:29.770647   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:29.789520   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.789535   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:29.789616   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:29.807618   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.807632   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:29.807705   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:29.827074   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.827093   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:29.827211   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:29.847772   23204 logs.go:276] 0 containers: []
	W0213 16:04:29.847789   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:29.847802   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:29.847827   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:29.894698   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:29.894713   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:29.916774   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:29.916791   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:30.002495   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:30.002518   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:30.002528   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:30.026061   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:30.026077   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:32.593546   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:32.683784   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:32.703931   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.703946   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:32.704024   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:32.725411   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.725434   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:32.725525   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:32.745216   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.745230   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:32.745301   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:32.765601   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.765618   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:32.765704   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:32.785248   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.785264   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:32.785333   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:32.808843   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.808861   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:32.808941   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:32.830381   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.830394   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:32.830474   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:32.854423   23204 logs.go:276] 0 containers: []
	W0213 16:04:32.854440   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:32.854450   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:32.854460   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:32.907780   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:32.907803   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:32.930355   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:32.930374   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:33.002012   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:33.002029   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:33.002036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:33.027429   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:33.027445   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:35.601458   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:35.619316   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:35.639796   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.639824   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:35.639936   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:35.660099   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.660113   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:35.660180   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:35.680178   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.680206   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:35.680272   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:35.699599   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.699615   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:35.699679   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:35.731638   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.731654   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:35.731771   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:35.754515   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.754532   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:35.754622   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:35.773730   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.773744   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:35.773800   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:35.792912   23204 logs.go:276] 0 containers: []
	W0213 16:04:35.792939   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:35.792972   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:35.792980   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:35.819019   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:35.819036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:35.894167   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:35.894183   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:35.946922   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:35.946944   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:35.968233   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:35.968251   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:36.035901   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
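By this point the pattern is fixed and repeats for the rest of the wait, roughly every three seconds: one pgrep probe, eight container queries, then the log gathers. The shape is a standard poll-with-diagnostics loop; a hedged sketch follows, with the interval and deadline inferred from the timestamps rather than taken from minikube's source:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer re-runs a cheap health probe and, on each failure,
// collects a diagnostic snapshot, matching the ~3s cadence in the log.
// The probe and diagnose funcs are placeholders for the steps above.
func waitForAPIServer(probe func() error, diagnose func(), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		diagnose() // docker ps checks + kubelet/dmesg/describe/Docker gathers
		if time.Now().After(deadline) {
			return errors.New("apiserver never became healthy")
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	err := waitForAPIServer(
		func() error { return errors.New("no kube-apiserver process") },
		func() { fmt.Println("gathering diagnostics ...") },
		10*time.Second)
	fmt.Println(err)
}
```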
	I0213 16:04:38.536024   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:38.553654   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:38.572456   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.572472   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:38.572544   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:38.591434   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.591449   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:38.591526   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:38.610312   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.610328   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:38.610416   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:38.627829   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.627842   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:38.627896   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:38.647236   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.647251   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:38.647331   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:38.667717   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.667731   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:38.667801   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:38.687362   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.687376   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:38.687447   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:38.708643   23204 logs.go:276] 0 containers: []
	W0213 16:04:38.708656   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:38.708666   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:38.708677   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:38.756549   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:38.756566   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:38.777790   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:38.777827   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:38.873958   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:38.873971   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:38.873979   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:38.899913   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:38.899932   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:41.469300   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:41.486705   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:41.505552   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.505566   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:41.505642   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:41.525040   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.525060   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:41.525136   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:41.544792   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.544812   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:41.544893   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:41.568935   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.568952   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:41.569028   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:41.591089   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.591106   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:41.591177   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:41.612042   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.612058   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:41.612130   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:41.632183   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.632199   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:41.632280   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:41.654592   23204 logs.go:276] 0 containers: []
	W0213 16:04:41.654608   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:41.654617   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:41.654624   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:41.712666   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:41.712681   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:41.734172   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:41.734188   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:41.800452   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:41.800462   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:41.800476   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:41.823124   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:41.823145   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:44.393157   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:44.410439   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:44.429764   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.429779   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:44.429890   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:44.450087   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.450100   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:44.450172   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:44.466886   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.466899   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:44.467034   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:44.484922   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.484936   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:44.485001   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:44.503586   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.503601   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:44.503669   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:44.523472   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.523487   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:44.523581   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:44.545417   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.545433   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:44.545496   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:44.565431   23204 logs.go:276] 0 containers: []
	W0213 16:04:44.565453   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:44.565466   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:44.565491   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:44.588475   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:44.588492   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:44.684792   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:44.684808   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:44.684816   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:44.707176   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:44.707191   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:44.779035   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:44.779066   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:47.328986   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:47.349339   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:47.369613   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.369627   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:47.369691   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:47.388616   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.388631   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:47.388745   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:47.413927   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.413944   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:47.414020   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:47.437802   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.437819   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:47.437901   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:47.458800   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.458814   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:47.458891   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:47.477479   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.477494   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:47.477558   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:47.498819   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.498834   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:47.498907   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:47.520870   23204 logs.go:276] 0 containers: []
	W0213 16:04:47.520884   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:47.520899   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:47.520908   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:47.573588   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:47.573611   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:47.596809   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:47.596832   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:47.691034   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:47.691071   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:47.691078   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:47.718370   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:47.718387   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:50.296489   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:50.313667   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:50.332402   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.332416   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:50.332482   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:50.350883   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.350897   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:50.350966   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:50.368495   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.368509   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:50.368577   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:50.387455   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.387469   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:50.387549   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:50.408854   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.408869   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:50.408943   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:50.429360   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.429374   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:50.429444   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:50.449170   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.449191   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:50.449255   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:50.469125   23204 logs.go:276] 0 containers: []
	W0213 16:04:50.469155   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:50.469162   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:50.469184   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:50.488818   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:50.488834   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:50.555894   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:50.555909   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:50.555917   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:50.577831   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:50.577845   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:50.649624   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:50.649641   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:53.198760   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:53.268674   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:53.285253   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.285267   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:53.285335   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:53.303586   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.303600   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:53.303671   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:53.323214   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.323227   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:53.323305   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:53.343335   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.343349   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:53.343421   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:53.361304   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.361318   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:53.361388   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:53.379951   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.379969   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:53.380049   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:53.400357   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.400373   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:53.400437   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:53.419782   23204 logs.go:276] 0 containers: []
	W0213 16:04:53.419801   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:53.419813   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:53.419825   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:53.463219   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:53.463235   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:53.482865   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:53.482881   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:53.548646   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:53.548688   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:53.548695   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:53.570260   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:53.570274   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:56.137044   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:56.154904   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:56.175911   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.175932   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:56.176033   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:56.196715   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.196729   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:56.196827   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:56.215574   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.215587   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:56.215654   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:56.233974   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.233988   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:56.234057   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:56.252084   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.252099   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:56.252198   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:56.270479   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.270492   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:56.270562   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:56.288985   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.289002   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:56.289142   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:56.311564   23204 logs.go:276] 0 containers: []
	W0213 16:04:56.311578   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:56.311592   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:56.311601   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:56.386703   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:56.386729   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:56.410413   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:56.410428   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:56.534655   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:56.534667   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:56.534674   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:56.555596   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:56.555609   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:04:59.117887   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:04:59.136047   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:04:59.155725   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.155738   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:04:59.155836   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:04:59.175697   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.175712   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:04:59.175786   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:04:59.197468   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.197483   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:04:59.197564   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:04:59.216838   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.216854   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:04:59.216938   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:04:59.236611   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.236625   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:04:59.236693   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:04:59.255880   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.255894   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:04:59.255959   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:04:59.273815   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.273829   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:04:59.273909   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:04:59.294313   23204 logs.go:276] 0 containers: []
	W0213 16:04:59.294326   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:04:59.294333   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:04:59.294340   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:04:59.338310   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:04:59.338343   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:04:59.360461   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:04:59.360478   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:04:59.426136   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:04:59.426149   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:04:59.426157   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:04:59.448037   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:04:59.448051   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:02.010346   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:02.028130   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:02.047293   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.047307   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:02.047382   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:02.068159   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.068175   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:02.068271   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:02.091825   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.091857   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:02.091948   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:02.112557   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.112571   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:02.112637   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:02.133265   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.133277   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:02.133354   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:02.153965   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.153997   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:02.154118   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:02.179164   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.179178   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:02.179249   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:02.203360   23204 logs.go:276] 0 containers: []
	W0213 16:05:02.203375   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:02.203382   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:02.203390   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:02.272700   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:02.272722   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:02.317315   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:02.317333   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:02.338783   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:02.338799   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:02.404623   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:02.404634   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:02.404647   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:04.927407   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:04.947031   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:04.970373   23204 logs.go:276] 0 containers: []
	W0213 16:05:04.970386   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:04.970448   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:04.989837   23204 logs.go:276] 0 containers: []
	W0213 16:05:04.989881   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:04.989983   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:05.017506   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.017526   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:05.017622   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:05.040887   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.040900   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:05.040972   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:05.061395   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.061409   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:05.061475   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:05.079844   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.079860   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:05.079947   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:05.103957   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.103973   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:05.104057   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:05.127713   23204 logs.go:276] 0 containers: []
	W0213 16:05:05.127740   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:05.127762   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:05.127769   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:05.172960   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:05.172976   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:05.200404   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:05.200463   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:05.273321   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:05.273383   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:05.273411   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:05.301878   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:05.301899   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
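The "container status" command above is a small shell fallback chain: `which crictl || echo crictl` substitutes the full crictl path when the binary exists and otherwise the bare name; if the resulting crictl ps -a then fails (binary missing, or no CRI socket configured), the outer || falls through to sudo docker ps -a. The same try-then-fall-back shape in Go, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the shell one-liner in the log.
    func containerStatus() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("err=%v\n%s", err, out)
    }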
	I0213 16:05:07.877850   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:07.895467   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:07.915731   23204 logs.go:276] 0 containers: []
	W0213 16:05:07.915749   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:07.915830   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:07.935032   23204 logs.go:276] 0 containers: []
	W0213 16:05:07.935046   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:07.935119   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:07.953994   23204 logs.go:276] 0 containers: []
	W0213 16:05:07.954008   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:07.954070   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:07.973026   23204 logs.go:276] 0 containers: []
	W0213 16:05:07.973039   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:07.973112   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:07.991279   23204 logs.go:276] 0 containers: []
	W0213 16:05:07.991294   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:07.991384   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:08.011225   23204 logs.go:276] 0 containers: []
	W0213 16:05:08.011239   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:08.011302   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:08.030933   23204 logs.go:276] 0 containers: []
	W0213 16:05:08.030947   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:08.031025   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:08.050612   23204 logs.go:276] 0 containers: []
	W0213 16:05:08.050625   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:08.050635   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:08.050643   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:08.118236   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:08.118251   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:08.160899   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:08.160914   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:08.180978   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:08.180995   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:08.248283   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:08.248295   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:08.248308   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
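Note that the order of the "Gathering logs for ..." sections shifts between passes (kubelet, dmesg, describe, Docker, container status at 16:05:05; container status first at 16:05:08). One plausible explanation, offered here as an assumption rather than a reading of minikube's source, is that the log sources live in a Go map, whose iteration order is deliberately randomized. A minimal sketch of that pattern, with the command strings taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Hypothetical mirror of the per-source commands visible in the log.
    	// Ranging over a map yields a different order on each run, matching
    	// the shuffled "Gathering logs for ..." order between passes.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("=== %s (err=%v, %d bytes) ===\n", name, err, len(out))
    	}
    }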
	I0213 16:05:10.771334   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:10.788464   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:10.806476   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.806490   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:10.806564   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:10.825207   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.825221   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:10.825278   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:10.843784   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.843799   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:10.843861   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:10.861886   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.861900   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:10.861966   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:10.880958   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.880972   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:10.881040   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:10.899913   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.899928   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:10.899993   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:10.920683   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.920700   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:10.920771   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:10.941199   23204 logs.go:276] 0 containers: []
	W0213 16:05:10.941214   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:10.941222   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:10.941236   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:11.006277   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:11.006291   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:11.049857   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:11.049873   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:11.070055   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:11.070087   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:11.135429   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
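In each "failed describe nodes" line the command string appears twice ("command: /bin/bash -c ... /bin/bash -c ...: Process exited with status 1"). Nothing ran twice: that shape is characteristic of wrapping an error that already embeds the command with a message that prepends it again. A speculative Go illustration of how such doubling arises, not a claim about minikube's actual error paths:

    package main

    import "fmt"

    func main() {
    	cmd := `/bin/bash -c "sudo kubectl describe nodes"`
    	// Inner layer: the runner embeds the command in its error text.
    	inner := fmt.Errorf("%s: Process exited with status 1", cmd)
    	// Outer layer: the caller prepends the command again while wrapping,
    	// producing the doubled text seen in the log.
    	outer := fmt.Errorf("command: %s %v", cmd, inner)
    	fmt.Println(outer)
    }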
	I0213 16:05:11.135440   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:11.135448   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:13.657700   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:13.674160   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:13.695599   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.695615   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:13.695712   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:13.713032   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.713047   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:13.713128   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:13.732192   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.732207   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:13.732286   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:13.751297   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.751310   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:13.751384   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:13.770292   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.770305   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:13.770375   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:13.790952   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.790965   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:13.791031   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:13.810040   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.810054   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:13.810123   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:13.830465   23204 logs.go:276] 0 containers: []
	W0213 16:05:13.830479   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:13.830486   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:13.830493   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:13.850303   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:13.850321   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:13.926101   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:13.926128   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:13.926152   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:13.947793   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:13.947808   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:14.012061   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:14.012076   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
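The passes land roughly every three seconds (16:05:02, :04.9, :07.9, :10.8, :13.7, ...), the signature of a fixed-interval poll waiting for the apiserver to come up, with the full log gather rerun on each failed probe. A stdlib sketch of that loop shape, assuming a probe like the hypothetical apiserverRunning above; the interval and timeout are illustrative, not minikube's actual values:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // pollUntil runs probe immediately and then every interval until it
    // succeeds or ctx expires, the cadence suggested by the ~3 s spacing
    // of the log passes.
    func pollUntil(ctx context.Context, interval time.Duration, probe func() bool) error {
    	t := time.NewTicker(interval)
    	defer t.Stop()
    	for {
    		if probe() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-t.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	err := pollUntil(ctx, 3*time.Second, func() bool { return false })
    	fmt.Println(err) // context deadline exceeded
    }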
	I0213 16:05:16.554739   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:16.573304   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:16.594089   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.594106   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:16.594183   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:16.615853   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.615868   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:16.615941   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:16.637063   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.637078   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:16.637157   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:16.656390   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.656404   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:16.656469   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:16.675766   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.675780   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:16.675848   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:16.694192   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.694207   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:16.694285   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:16.715635   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.715648   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:16.715723   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:16.734833   23204 logs.go:276] 0 containers: []
	W0213 16:05:16.734848   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:16.734855   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:16.734862   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:16.803984   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:16.803997   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:16.804004   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:16.826693   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:16.826712   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:16.893407   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:16.893423   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:16.938921   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:16.938941   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:19.460872   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:19.479423   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:19.499202   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.499222   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:19.499316   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:19.519493   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.519506   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:19.519579   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:19.539759   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.539773   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:19.539839   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:19.563248   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.563263   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:19.563340   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:19.585945   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.585959   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:19.586023   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:19.610019   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.610034   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:19.610106   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:19.629618   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.629634   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:19.629703   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:19.670911   23204 logs.go:276] 0 containers: []
	W0213 16:05:19.670924   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:19.670932   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:19.670940   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:19.713770   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:19.713787   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:19.734132   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:19.734146   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:19.811294   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
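The describe-nodes probe runs a kubectl pinned to the cluster version, /var/lib/minikube/binaries/v1.16.0/kubectl, against the node-local kubeconfig, so the kubectl binary always matches the control plane it inspects. A small sketch of assembling that invocation from a version string, using only the paths visible in the log:

    package main

    import (
    	"fmt"
    	"path"
    )

    // kubectlCmd builds the node-side describe-nodes command for a given
    // Kubernetes version, matching the paths visible in the log.
    func kubectlCmd(version string) string {
    	bin := path.Join("/var/lib/minikube/binaries", version, "kubectl")
    	return fmt.Sprintf("sudo %s describe nodes --kubeconfig=/var/lib/minikube/kubeconfig", bin)
    }

    func main() {
    	fmt.Println(kubectlCmd("v1.16.0"))
    }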
	I0213 16:05:19.811320   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:19.811345   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:19.834681   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:19.834696   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:22.399587   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:22.417950   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:22.436307   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.436322   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:22.436396   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:22.454902   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.454929   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:22.454999   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:22.474846   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.474860   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:22.474932   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:22.494313   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.494328   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:22.494402   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:22.514668   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.514682   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:22.514745   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:22.534460   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.534473   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:22.534545   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:22.554151   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.554164   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:22.554236   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:22.573871   23204 logs.go:276] 0 containers: []
	W0213 16:05:22.573885   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
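Reading the paired lines above: the info line (logs.go:276) reports the parsed ID list, and the warning (logs.go:278) fires when a component expected in a healthy cluster has no container at all. Eight warnings per pass therefore mean the control plane never started on this node; log collection itself is working. A sketch of that info-then-warn shape (report is a hypothetical stand-in, not minikube's function):

    package main

    import (
    	"fmt"
    	"log"
    )

    // report mirrors the logs.go:276/278 pair: print the parsed ID list,
    // then escalate an empty list for an expected component to a warning.
    func report(component string, ids []string) {
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	if len(ids) == 0 {
    		log.Printf("No container was found matching %q", component)
    	}
    }

    func main() {
    	report("kube-apiserver", nil)
    }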
	I0213 16:05:22.573893   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:22.573901   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:22.618764   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:22.618779   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:22.639156   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:22.639173   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:22.706078   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:22.706093   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:22.706100   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:22.727309   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:22.727324   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:25.296924   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:25.313990   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:25.332650   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.332663   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:25.332731   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:25.351370   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.351384   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:25.351455   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:25.371212   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.371230   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:25.371302   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:25.390637   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.390651   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:25.390720   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:25.408725   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.408738   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:25.408806   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:25.427504   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.427518   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:25.427587   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:25.446462   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.446477   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:25.446547   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:25.466462   23204 logs.go:276] 0 containers: []
	W0213 16:05:25.466477   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:25.466484   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:25.466491   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:25.510494   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:25.510512   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:25.530947   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:25.530964   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:25.605859   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:25.605875   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:25.605886   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:25.628259   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:25.628279   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:28.195812   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:28.212880   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:28.232335   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.232350   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:28.232413   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:28.252575   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.252588   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:28.252654   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:28.271591   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.271605   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:28.271701   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:28.289743   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.289757   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:28.289829   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:28.307771   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.307783   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:28.307851   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:28.326519   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.326533   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:28.326601   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:28.345019   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.345034   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:28.345103   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:28.365751   23204 logs.go:276] 0 containers: []
	W0213 16:05:28.365766   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:28.365774   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:28.365780   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:28.408560   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:28.408576   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:28.428870   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:28.428886   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:28.502375   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:28.502403   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:28.502413   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:28.523958   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:28.523974   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:31.101216   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:31.118164   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:31.137437   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.137450   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:31.137513   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:31.156485   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.156499   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:31.156564   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:31.174755   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.174769   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:31.174839   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:31.193607   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.193620   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:31.193685   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:31.212813   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.212827   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:31.212896   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:31.232071   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.232085   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:31.232152   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:31.250516   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.250528   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:31.250597   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:31.270494   23204 logs.go:276] 0 containers: []
	W0213 16:05:31.270526   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:31.270539   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:31.270547   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:31.314115   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:31.314131   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:31.335908   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:31.335923   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:31.401788   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:31.401800   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:31.401808   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:31.423149   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:31.423164   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:33.991453   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:34.008738   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:34.028229   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.028244   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:34.028311   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:34.047784   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.047798   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:34.047874   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:34.067568   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.067582   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:34.067651   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:34.086870   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.086884   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:34.086951   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:34.105845   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.105859   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:34.105922   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:34.125409   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.125423   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:34.125490   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:34.144736   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.144750   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:34.144813   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:34.163370   23204 logs.go:276] 0 containers: []
	W0213 16:05:34.163383   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:34.163390   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:34.163397   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:34.208646   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:34.208667   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:34.228874   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:34.228909   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:34.295626   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:34.295636   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:34.295643   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:34.316956   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:34.316972   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:36.883156   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:36.900420   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:36.919449   23204 logs.go:276] 0 containers: []
	W0213 16:05:36.919464   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:36.919528   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:36.938402   23204 logs.go:276] 0 containers: []
	W0213 16:05:36.938416   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:36.938482   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:36.957946   23204 logs.go:276] 0 containers: []
	W0213 16:05:36.957960   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:36.958028   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:36.977341   23204 logs.go:276] 0 containers: []
	W0213 16:05:36.977354   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:36.977428   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:36.996337   23204 logs.go:276] 0 containers: []
	W0213 16:05:36.996351   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:36.996420   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:37.015460   23204 logs.go:276] 0 containers: []
	W0213 16:05:37.015474   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:37.015540   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:37.035536   23204 logs.go:276] 0 containers: []
	W0213 16:05:37.035549   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:37.035619   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:37.056562   23204 logs.go:276] 0 containers: []
	W0213 16:05:37.056576   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:37.056586   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:37.056617   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:37.078001   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:37.078022   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:37.196912   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:37.196924   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:37.196932   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:37.218366   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:37.218381   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:37.285940   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:37.285954   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:39.831258   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:39.852987   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:39.875051   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.875064   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:39.875139   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:39.897981   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.898033   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:39.898114   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:39.917416   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.917429   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:39.917495   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:39.937024   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.937038   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:39.937111   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:39.955663   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.955676   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:39.955737   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:39.975148   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.975160   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:39.975225   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:39.995173   23204 logs.go:276] 0 containers: []
	W0213 16:05:39.995187   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:39.995255   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:40.014196   23204 logs.go:276] 0 containers: []
	W0213 16:05:40.014209   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:40.014217   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:40.014224   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:40.080739   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:40.080753   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:40.125706   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:40.125724   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:40.146301   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:40.146319   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:40.215518   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:40.215531   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:40.215539   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:42.739378   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:42.759803   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:42.778190   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.778203   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:42.778275   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:42.796237   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.796256   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:42.796322   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:42.817499   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.817513   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:42.817581   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:42.840014   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.840032   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:42.840101   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:42.869223   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.869238   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:42.869307   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:42.890434   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.890449   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:42.890518   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:42.909933   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.909946   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:42.910010   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:42.929484   23204 logs.go:276] 0 containers: []
	W0213 16:05:42.929497   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:42.929504   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:42.929512   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:42.950937   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:42.950967   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:43.019026   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:43.019049   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:43.062857   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:43.062873   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:43.084258   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:43.084272   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:43.149625   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:45.650193   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:45.667762   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:45.687688   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.687702   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:45.687766   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:45.707336   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.707351   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:45.707416   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:45.726843   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.726863   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:45.726945   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:45.747916   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.747930   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:45.748001   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:45.767017   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.767031   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:45.767092   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:45.787332   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.787347   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:45.787413   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:45.807131   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.807145   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:45.807209   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:45.826571   23204 logs.go:276] 0 containers: []
	W0213 16:05:45.826585   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:45.826592   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:45.826604   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:45.892851   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:45.892873   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:45.936617   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:45.936636   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:45.956971   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:45.956989   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:46.028681   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:46.028693   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:46.028700   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:48.552923   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:48.570737   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:48.590474   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.590488   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:48.590555   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:48.611456   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.611475   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:48.611543   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:48.631289   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.631304   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:48.631376   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:48.650690   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.650702   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:48.650771   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:48.670425   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.670438   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:48.670522   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:48.690469   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.690486   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:48.690551   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:48.709955   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.709971   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:48.710047   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:48.730750   23204 logs.go:276] 0 containers: []
	W0213 16:05:48.730764   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:48.730773   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:48.730781   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:48.773702   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:48.773718   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:48.794779   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:48.794799   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:48.867201   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:48.867213   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:48.867221   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:48.890240   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:48.890255   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:51.458039   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:51.477538   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:51.497634   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.497650   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:51.497735   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:51.515848   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.515865   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:51.515936   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:51.536059   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.536072   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:51.536156   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:51.555370   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.555384   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:51.555462   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:51.574569   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.574583   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:51.574651   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:51.593080   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.593092   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:51.593159   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:51.611557   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.611571   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:51.611635   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:51.633101   23204 logs.go:276] 0 containers: []
	W0213 16:05:51.633116   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:51.633124   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:51.633131   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:51.708452   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:51.708483   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:51.753986   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:51.754002   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:51.775460   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:51.775475   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:51.853807   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:51.853823   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:51.853834   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:54.375890   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:54.393556   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:54.413952   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.413965   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:54.414028   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:54.433670   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.433687   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:54.433762   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:54.452891   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.452904   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:54.452970   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:54.473452   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.473465   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:54.473532   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:54.492253   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.492266   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:54.492348   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:54.511748   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.511762   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:54.511836   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:54.531833   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.531848   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:54.531916   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:54.551409   23204 logs.go:276] 0 containers: []
	W0213 16:05:54.551423   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:54.551431   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:54.551438   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:54.597166   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:54.597182   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:54.617438   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:54.617454   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:54.684500   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:54.684516   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:54.684525   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:05:54.707142   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:54.707157   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:57.275000   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:05:57.292428   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:05:57.311389   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.311400   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:05:57.311478   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:05:57.330231   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.330246   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:05:57.330321   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:05:57.348819   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.348833   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:05:57.348899   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:05:57.368310   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.368325   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:05:57.368394   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:05:57.389120   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.389135   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:05:57.389204   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:05:57.408115   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.408129   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:05:57.408193   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:05:57.427546   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.427580   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:05:57.427655   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:05:57.446087   23204 logs.go:276] 0 containers: []
	W0213 16:05:57.446101   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:05:57.446109   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:05:57.446122   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:05:57.511202   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:05:57.511217   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:05:57.553305   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:05:57.553320   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:05:57.575227   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:05:57.575245   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:05:57.645968   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:05:57.645980   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:05:57.645987   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:00.167764   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:00.186046   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:00.210050   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.210068   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:00.210144   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:00.233705   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.239718   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:00.239793   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:00.261376   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.261393   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:00.261465   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:00.280953   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.280967   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:00.281034   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:00.302583   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.302618   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:00.302689   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:00.322877   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.322891   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:00.322953   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:00.346566   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.346581   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:00.346674   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:00.374884   23204 logs.go:276] 0 containers: []
	W0213 16:06:00.374898   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:00.374908   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:00.374916   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:00.474438   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:00.474455   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:00.530543   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:00.530561   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:00.557224   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:00.557242   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:00.674560   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:00.674610   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:00.674621   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:03.206168   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:03.224353   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:03.242364   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.242379   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:03.242450   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:03.260895   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.260908   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:03.260973   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:03.278256   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.278271   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:03.278343   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:03.296154   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.296170   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:03.296230   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:03.314525   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.314538   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:03.314605   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:03.333251   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.333264   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:03.333327   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:03.352526   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.352544   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:03.352652   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:03.371185   23204 logs.go:276] 0 containers: []
	W0213 16:06:03.371198   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:03.371206   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:03.371213   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:03.390902   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:03.390918   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:03.469150   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:03.469162   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:03.469169   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:03.490389   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:03.490405   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:03.553010   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:03.553026   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:06.098395   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:06.114581   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:06.134299   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.134317   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:06.134380   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:06.152904   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.152917   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:06.152989   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:06.171440   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.171456   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:06.171528   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:06.189209   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.189222   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:06.189290   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:06.209507   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.209521   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:06.209599   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:06.226699   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.226715   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:06.226782   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:06.245231   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.245245   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:06.245306   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:06.263482   23204 logs.go:276] 0 containers: []
	W0213 16:06:06.263497   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:06.263505   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:06.263512   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:06.308714   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:06.308729   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:06.329218   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:06.329233   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:06.397168   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:06.397180   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:06.397188   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:06.418753   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:06.418766   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:08.979655   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:08.996220   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:09.013326   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.013347   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:09.013422   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:09.031075   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.031089   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:09.031161   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:09.048073   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.048086   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:09.048166   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:09.067449   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.067463   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:09.067530   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:09.088677   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.088696   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:09.088812   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:09.109333   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.109347   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:09.109435   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:09.127852   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.127867   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:09.127934   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:09.161300   23204 logs.go:276] 0 containers: []
	W0213 16:06:09.161317   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:09.161326   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:09.161346   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:09.203447   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:09.203462   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:09.223997   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:09.224012   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:09.293458   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:09.293471   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:09.293478   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:09.315399   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:09.315415   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:11.878364   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:11.899479   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:11.921300   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.921311   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:11.921372   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:11.940349   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.940365   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:11.940433   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:11.958295   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.958308   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:11.958374   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:11.978133   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.978147   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:11.978212   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:11.996930   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.996949   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:11.997009   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:12.018116   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.018129   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:12.018204   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:12.038622   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.038636   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:12.038695   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:12.073503   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.073523   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:12.073534   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:12.073545   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:12.128415   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:12.128435   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:12.151550   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:12.151571   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:12.232156   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:12.232167   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:12.232175   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:12.256305   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:12.256321   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:14.823991   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:14.840749   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:14.857498   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.857521   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:14.857596   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:14.875478   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.875492   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:14.875559   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:14.892613   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.892629   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:14.892699   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:14.910463   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.910477   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:14.910539   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:14.928500   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.928514   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:14.928578   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:14.946365   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.946381   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:14.946468   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:14.963486   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.963500   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:14.963565   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:14.981360   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.981374   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:14.981381   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:14.981389   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:15.039828   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:15.039856   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:15.039865   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:15.063337   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:15.063352   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:15.179124   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:15.179140   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:15.223147   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:15.244191   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:17.764785   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:17.785042   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:17.804809   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.804821   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:17.804889   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:17.822803   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.822817   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:17.822885   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:17.842934   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.842948   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:17.843015   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:17.860834   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.860847   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:17.860910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:17.881108   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.881125   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:17.881210   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:17.901797   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.901810   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:17.901879   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:17.922651   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.922665   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:17.922795   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:17.942346   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.942360   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:17.942369   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:17.942381   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:17.992362   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:17.992387   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:18.016279   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:18.016322   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:18.167950   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:18.167962   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:18.167983   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:18.190970   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:18.190986   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:20.757937   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:20.775401   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:20.794812   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.794828   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:20.794899   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:20.817500   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.817518   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:20.817597   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:20.865196   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.865210   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:20.865268   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:20.885292   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.885308   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:20.885389   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:20.907484   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.907498   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:20.907567   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:20.928848   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.928862   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:20.928944   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:20.948712   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.948726   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:20.948794   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:20.972289   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.972302   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:20.972313   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:20.972324   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:21.021004   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:21.021031   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:21.042321   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:21.042337   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:21.118911   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:21.118922   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:21.118930   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:21.141618   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:21.141634   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:23.710574   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:23.728656   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:23.747854   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.747867   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:23.747958   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:23.765397   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.765411   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:23.765475   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:23.784188   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.784200   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:23.784265   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:23.803280   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.803292   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:23.803357   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:23.822599   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.822613   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:23.822679   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:23.840718   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.840732   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:23.840797   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:23.860100   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.860114   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:23.860237   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:23.879133   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.879148   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:23.879155   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:23.879161   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:23.926130   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:23.926147   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:23.945951   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:23.946010   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:24.013479   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:24.013491   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:24.013519   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:24.036470   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:24.036487   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:26.606918   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:26.625318   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:26.643082   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.643096   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:26.643172   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:26.662047   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.662061   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:26.662137   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:26.681290   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.681303   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:26.681367   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:26.699193   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.699207   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:26.699292   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:26.717664   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.717678   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:26.717743   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:26.733881   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.733904   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:26.733984   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:26.752332   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.752347   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:26.752413   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:26.771883   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.771898   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:26.771906   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:26.771927   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:26.816963   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:26.816975   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:26.837223   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:26.837269   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:26.900361   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:26.900376   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:26.900384   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:26.923407   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:26.923422   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:29.486289   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:29.504817   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:29.524064   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.524079   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:29.524146   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:29.542407   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.542421   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:29.542489   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:29.564073   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.564090   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:29.564169   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:29.584218   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.584231   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:29.584295   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:29.603097   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.603113   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:29.603189   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:29.621373   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.621389   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:29.621456   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:29.641120   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.641140   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:29.641214   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:29.659507   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.659519   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:29.659527   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:29.659541   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:29.704150   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:29.704167   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:29.724757   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:29.724793   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:29.794272   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:29.794283   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:29.794291   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:29.816362   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:29.816377   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
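
The container-status command above packs its fallback into a single line. Expanded for readability, with the behaviour unchanged:

    # Prefer crictl when it is on PATH; if "which crictl" finds nothing, the
    # bare word "crictl" is substituted, and its failure to run triggers the
    # docker fallback on the right-hand side of ||.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
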
	I0213 16:06:32.380949   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:32.398390   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:32.419831   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.419849   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:32.419932   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:32.440511   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.440529   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:32.440645   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:32.459350   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.459367   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:32.459476   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:32.479152   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.479168   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:32.479257   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:32.500470   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.500486   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:32.500594   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:32.523244   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.523267   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:32.523368   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:32.548253   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.548272   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:32.548376   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:32.573899   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.573918   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:32.573933   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:32.573946   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:32.638927   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:32.638947   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:32.660545   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:32.660629   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:32.737721   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:32.737737   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:32.737746   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:32.764410   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:32.764429   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
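
Every "The connection to the server localhost:8443 was refused" block above is kubectl failing to reach an apiserver that never started (the kube-apiserver probe keeps finding 0 containers). A hypothetical manual check that reproduces the same refusal from inside the node; the /healthz path is the standard apiserver health endpoint, not something taken from this log:

    # -s: quiet, -k: the apiserver serves a self-signed certificate
    curl -sk https://localhost:8443/healthz || echo "nothing listening on 8443"
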
	I0213 16:06:35.335925   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:35.354753   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:35.377553   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.377569   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:35.377663   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:35.400589   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.400606   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:35.400685   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:35.422229   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.422246   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:35.422317   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:35.443969   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.443983   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:35.444055   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:35.467060   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.467076   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:35.467145   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:35.488796   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.488814   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:35.488910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:35.510311   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.510326   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:35.510405   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:35.535580   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.535597   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:35.535605   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:35.535612   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:35.583314   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:35.583334   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:35.606200   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:35.606217   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:35.681475   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:35.681486   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:35.681499   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:35.705786   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:35.705815   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:38.288536   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:38.308492   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:38.331473   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.331525   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:38.331615   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:38.355412   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.355428   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:38.355507   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:38.380139   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.380179   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:38.380284   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:38.404699   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.404727   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:38.404830   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:38.430046   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.430062   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:38.430131   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:38.451994   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.452011   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:38.452085   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:38.475507   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.475523   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:38.475605   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:38.496331   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.496346   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:38.496354   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:38.496360   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:38.545370   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:38.545391   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:38.567965   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:38.567981   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:38.639688   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:38.639702   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:38.639714   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:38.661187   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:38.661202   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
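
The timestamps show the whole cycle repeating roughly every three seconds (16:06:29 → 16:06:32 → 16:06:35 → 16:06:38), each round opening with the same pgrep. A sketch of such a poll loop, with the pgrep pattern copied from the log and the interval inferred from the timestamps rather than from minikube's source:

    # wait until a kube-apiserver process for the minikube profile appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3   # assumed interval; minikube's real retry policy may differ
    done
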
	I0213 16:06:41.230491   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:41.248063   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:41.267063   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.267077   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:41.267153   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:41.285751   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.285765   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:41.285831   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:41.303981   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.303995   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:41.304061   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:41.322783   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.322797   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:41.322872   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:41.341906   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.341920   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:41.341985   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:41.361943   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.361957   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:41.362029   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:41.380925   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.380940   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:41.381008   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:41.401210   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.401225   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:41.401233   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:41.401243   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:41.444527   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:41.444549   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:41.465064   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:41.465102   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:41.531320   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:41.531334   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:41.531341   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:41.552924   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:41.552940   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:44.119663   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:44.137105   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:44.177327   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.177344   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:44.177409   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:44.196667   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.196681   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:44.196748   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:44.217301   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.217316   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:44.217392   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:44.237862   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.237875   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:44.237954   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:44.256685   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.256699   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:44.256784   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:44.276758   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.276772   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:44.276846   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:44.296838   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.296851   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:44.296918   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:44.317038   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.317051   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:44.317058   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:44.317064   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:44.338549   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:44.338564   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:44.403691   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:44.403706   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:44.447056   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:44.447071   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:44.468062   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:44.468145   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:44.533082   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
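
The recurring "failed describe nodes" warning is the same command exiting non-zero each round. To reproduce it by hand inside the node, the exact invocation from the log (paths verbatim; expect the same status-1 exit for as long as localhost:8443 refuses connections):

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit: $?"   # 1 while the apiserver is unreachable
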
	I0213 16:06:47.034230   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:47.052074   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:47.070932   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.070946   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:47.071010   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:47.090609   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.090625   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:47.090701   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:47.110859   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.110871   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:47.110932   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:47.130837   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.130850   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:47.130921   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:47.150552   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.150566   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:47.150642   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:47.171211   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.171225   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:47.171294   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:47.192077   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.192093   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:47.192158   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:47.211592   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.211607   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:47.211614   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:47.211621   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:47.277233   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:47.277250   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:47.322830   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:47.322857   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:47.344753   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:47.344787   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:47.414534   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:47.414549   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:47.414560   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:49.937219   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:49.956116   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:49.976815   23204 logs.go:276] 0 containers: []
	W0213 16:06:49.976829   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:49.976895   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:49.996231   23204 logs.go:276] 0 containers: []
	W0213 16:06:49.996244   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:49.996327   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:50.016122   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.016151   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:50.016216   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:50.034981   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.034996   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:50.035067   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:50.055131   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.055144   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:50.055210   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:50.074693   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.074706   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:50.074768   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:50.094155   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.094168   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:50.094260   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:50.117831   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.117845   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:50.117851   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:50.117859   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:50.137877   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:50.137893   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:50.206221   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:50.206234   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:50.206248   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:50.227555   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:50.238810   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:50.308254   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:50.308270   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:52.853636   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:52.875237   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:52.895162   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.895177   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:52.895243   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:52.922809   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.922822   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:52.922882   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:52.943679   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.943693   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:52.943767   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:52.964526   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.964541   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:52.964610   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:52.983233   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.983249   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:52.983322   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:53.005049   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.005062   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:53.005130   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:53.026461   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.026479   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:53.026553   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:53.046688   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.046703   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:53.046716   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:53.046731   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:53.097708   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:53.097731   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:53.120490   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:53.120513   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:53.189929   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:53.189959   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:53.189967   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:53.212296   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:53.212315   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:55.779053   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:55.795938   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:55.815494   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.815520   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:55.815601   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:55.834915   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.834928   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:55.835000   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:55.853845   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.853858   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:55.853929   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:55.872716   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.872730   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:55.872799   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:55.891974   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.891987   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:55.892051   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:55.912069   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.912083   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:55.912150   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:55.933564   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.933578   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:55.933647   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:55.953304   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.953319   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:55.953326   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:55.953334   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:55.998111   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:55.998130   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:56.018937   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:56.018953   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:56.090168   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:56.090180   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:56.090197   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:56.114537   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:56.114555   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
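
The Docker log-gathering step above relies on journalctl unit filters. The same command, annotated (flags as documented for systemd's journalctl):

    # two -u filters select both the docker and cri-docker units;
    # -n 400 keeps only the newest 400 journal lines
    sudo journalctl -u docker -u cri-docker -n 400
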
	I0213 16:06:58.699809   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:58.717085   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:58.736601   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.736625   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:58.736713   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:58.756976   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.756988   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:58.757058   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:58.775336   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.775350   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:58.775420   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:58.794895   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.794909   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:58.794973   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:58.813693   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.813707   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:58.813779   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:58.834442   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.834455   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:58.834526   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:58.853628   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.853642   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:58.853709   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:58.874226   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.874241   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:58.874249   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:58.874258   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:58.918908   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:58.918930   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:58.940967   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:58.940986   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:59.010750   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:59.010778   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:59.010788   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:59.032921   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:59.032938   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:01.670843   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:01.688847   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:01.707712   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.707726   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:01.707798   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:01.726763   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.726780   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:01.726853   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:01.745728   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.745742   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:01.745809   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:01.764786   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.764801   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:01.764865   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:01.784588   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.784603   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:01.784667   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:01.804107   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.804120   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:01.804186   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:01.824939   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.824953   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:01.825020   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:01.847345   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.847359   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:01.847368   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:01.847374   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:01.894045   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:01.894063   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:01.919018   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:01.919034   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:01.987690   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:01.987707   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:01.987717   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:02.009473   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:02.009489   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:04.576753   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:04.594769   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:04.612281   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.612316   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:04.612411   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:04.632386   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.632401   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:04.632467   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:04.653301   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.653317   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:04.653381   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:04.672749   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.672763   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:04.672832   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:04.693469   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.693484   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:04.693557   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:04.712727   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.712742   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:04.712828   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:04.731537   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.731552   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:04.731618   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:04.750678   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.750692   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:04.750699   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:04.750707   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:04.830164   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:04.830183   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:04.830199   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:04.873186   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:04.873203   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:04.980459   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:04.980476   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:05.026446   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:05.026465   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
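
The dmesg step above trims the kernel ring buffer down to warnings and worse. The same command with the long options spelled out, as I read the util-linux manual (behaviour should be identical to the short form in the log):

    sudo dmesg --nopager --human --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400
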
	I0213 16:07:07.547198   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:07.564676   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:07.584452   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.584482   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:07.584546   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:07.603961   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.603976   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:07.604043   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:07.625951   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.625961   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:07.626027   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:07.645335   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.645350   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:07.645433   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:07.664760   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.664773   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:07.664840   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:07.685916   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.685929   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:07.685996   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:07.705424   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.705439   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:07.705507   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:07.725077   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.725092   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:07.725099   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:07.725107   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:07.768365   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:07.768381   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:07.789020   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:07.789036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:07.856687   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:07.856700   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:07.856722   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:07.878857   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:07.878872   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:10.446464   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:10.462730   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:10.481747   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.481762   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:10.481827   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:10.501297   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.501312   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:10.501378   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:10.521630   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.521644   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:10.521708   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:10.540843   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.540859   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:10.540927   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:10.561550   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.561566   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:10.561635   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:10.581426   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.581440   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:10.581506   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:10.602519   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.602533   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:10.602599   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:10.623990   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.624004   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:10.624012   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:10.624021   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:10.690490   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:10.690502   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:10.690524   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:10.713138   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:10.713151   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:10.780997   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:10.781012   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:10.823764   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:10.823779   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:13.346412   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:13.364742   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:13.385605   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.385618   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:13.385684   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:13.404659   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.404674   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:13.404741   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:13.424194   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.424208   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:13.424276   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:13.444479   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.444495   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:13.444579   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:13.463851   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.463865   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:13.463929   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:13.482756   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.482771   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:13.482836   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:13.501234   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.501248   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:13.501317   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:13.521980   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.522000   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:13.522009   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:13.522016   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:13.566636   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:13.566655   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:13.589727   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:13.589758   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:13.687645   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:13.687657   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:13.687666   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:13.709059   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:13.709074   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:16.278222   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:16.295645   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:16.314050   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.314065   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:16.314151   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:16.333624   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.333639   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:16.333707   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:16.352203   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.352217   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:16.352287   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:16.371605   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.371620   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:16.371685   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:16.391243   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.391259   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:16.391324   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:16.410532   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.410546   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:16.410611   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:16.432222   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.432236   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:16.432347   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:16.452388   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.452403   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:16.452410   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:16.452418   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:16.473970   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:16.473983   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:16.542437   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:16.542453   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:16.592402   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:16.592421   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:16.615435   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:16.615452   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:16.683684   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:19.185219   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:19.203205   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:19.222329   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.222343   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:19.222408   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:19.240682   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.240695   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:19.240761   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:19.259831   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.259847   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:19.259922   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:19.279989   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.280003   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:19.280069   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:19.298953   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.298968   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:19.299037   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:19.319608   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.319623   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:19.319687   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:19.338179   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.338193   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:19.338258   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:19.357191   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.357205   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:19.357212   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:19.357237   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:19.401102   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:19.401118   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:19.422599   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:19.422656   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:19.495266   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:19.495300   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:19.495308   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:19.517454   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:19.517494   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:22.088434   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:22.106810   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:22.126803   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.126816   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:22.126880   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:22.148572   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.148587   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:22.148665   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:22.170036   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.170051   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:22.170115   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:22.189367   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.189382   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:22.189449   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:22.208923   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.208938   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:22.209008   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:22.229102   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.229116   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:22.229184   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:22.248885   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.248899   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:22.248963   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:22.267890   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.267905   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:22.267912   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:22.267919   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:22.314020   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:22.314036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:22.334595   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:22.334611   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:22.413063   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:22.413102   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:22.413124   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:22.434670   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:22.434684   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:24.999557   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:25.016568   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:25.037462   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.037475   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:25.037547   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:25.058411   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.058424   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:25.058492   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:25.079318   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.079332   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:25.079404   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:25.102107   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.102122   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:25.102214   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:25.122393   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.122406   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:25.122471   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:25.142193   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.142212   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:25.142320   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:25.169785   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.169800   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:25.169881   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:25.189929   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.189944   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:25.189951   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:25.189958   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:25.236980   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:25.237000   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:25.258559   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:25.258575   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:25.336280   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:25.336319   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:25.336327   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:25.359020   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:25.359035   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:27.923949   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:27.941344   23204 kubeadm.go:640] restartCluster took 4m12.442168861s
	W0213 16:07:27.941388   23204 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0213 16:07:27.941407   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 16:07:28.367255   23204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:07:28.384676   23204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:07:28.400446   23204 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 16:07:28.400537   23204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:07:28.415605   23204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 16:07:28.415636   23204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 16:07:28.472081   23204 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 16:07:28.472580   23204 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 16:07:28.727540   23204 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 16:07:28.727715   23204 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 16:07:28.727804   23204 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 16:07:28.904696   23204 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 16:07:28.906558   23204 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 16:07:28.913307   23204 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 16:07:28.984848   23204 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 16:07:29.006643   23204 out.go:204]   - Generating certificates and keys ...
	I0213 16:07:29.006777   23204 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 16:07:29.006913   23204 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 16:07:29.007031   23204 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 16:07:29.007125   23204 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 16:07:29.007195   23204 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 16:07:29.007249   23204 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 16:07:29.007374   23204 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 16:07:29.007502   23204 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 16:07:29.007638   23204 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 16:07:29.007718   23204 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 16:07:29.007803   23204 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 16:07:29.007918   23204 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 16:07:29.373507   23204 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 16:07:29.623325   23204 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 16:07:29.737480   23204 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 16:07:29.926003   23204 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 16:07:29.926937   23204 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 16:07:29.948773   23204 out.go:204]   - Booting up control plane ...
	I0213 16:07:29.948903   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 16:07:29.948984   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 16:07:29.949038   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 16:07:29.949096   23204 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 16:07:29.949220   23204 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 16:08:09.935613   23204 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 16:08:09.936349   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:09.936546   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:14.937992   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:14.938152   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:24.939211   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:24.939460   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:44.940517   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:44.940747   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:09:24.944560   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:09:24.944714   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:09:24.944726   23204 kubeadm.go:322] 
	I0213 16:09:24.944754   23204 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 16:09:24.944788   23204 kubeadm.go:322] 	timed out waiting for the condition
	I0213 16:09:24.944799   23204 kubeadm.go:322] 
	I0213 16:09:24.944829   23204 kubeadm.go:322] This error is likely caused by:
	I0213 16:09:24.944854   23204 kubeadm.go:322] 	- The kubelet is not running
	I0213 16:09:24.944939   23204 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 16:09:24.944949   23204 kubeadm.go:322] 
	I0213 16:09:24.945021   23204 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 16:09:24.945044   23204 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 16:09:24.945068   23204 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 16:09:24.945074   23204 kubeadm.go:322] 
	I0213 16:09:24.945171   23204 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 16:09:24.945256   23204 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 16:09:24.945325   23204 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 16:09:24.945369   23204 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 16:09:24.945424   23204 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 16:09:24.945450   23204 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 16:09:24.948369   23204 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 16:09:24.948433   23204 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 16:09:24.948548   23204 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 16:09:24.948633   23204 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:09:24.948707   23204 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 16:09:24.948764   23204 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 16:09:24.948838   23204 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
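Before minikube's reset and re-init below, the kubeadm failure text above already names the useful next checks. Batched here as a hedged sketch only; NODE is a placeholder and not taken from this log, and port 10248 is the kubelet healthz endpoint quoted in the failure text:

	# Triage sketch built from the commands quoted in the failure text above.
	NODE=<minikube-node-container>   # placeholder: substitute the real node container name
	docker exec "$NODE" systemctl status kubelet --no-pager
	docker exec "$NODE" journalctl -xeu kubelet --no-pager | tail -n 100
	docker exec "$NODE" curl -sSL http://localhost:10248/healthz
	docker exec "$NODE" sh -c 'docker ps -a | grep kube | grep -v pause'
	# For any failed container ID found above: docker exec "$NODE" docker logs CONTAINERID
	# The IsDockerSystemdCheck warning can be confirmed with:
	docker exec "$NODE" docker info --format '{{.CgroupDriver}}'

In this run the healthz probe would presumably fail the same way the log shows (connection refused on 127.0.0.1:10248), pointing at the kubelet never coming up rather than at a crashed control-plane container.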
	I0213 16:09:24.948870   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 16:09:25.381552   23204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:09:25.398785   23204 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 16:09:25.398853   23204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:09:25.415534   23204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 16:09:25.415561   23204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 16:09:25.470972   23204 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 16:09:25.471013   23204 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 16:09:25.728855   23204 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 16:09:25.728999   23204 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 16:09:25.729111   23204 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 16:09:25.912400   23204 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 16:09:25.913158   23204 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 16:09:25.919829   23204 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 16:09:25.982141   23204 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 16:09:26.004839   23204 out.go:204]   - Generating certificates and keys ...
	I0213 16:09:26.004923   23204 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 16:09:26.004996   23204 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 16:09:26.005054   23204 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 16:09:26.005105   23204 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 16:09:26.005198   23204 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 16:09:26.005259   23204 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 16:09:26.005311   23204 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 16:09:26.005443   23204 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 16:09:26.005546   23204 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 16:09:26.005646   23204 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 16:09:26.005707   23204 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 16:09:26.005797   23204 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 16:09:26.109033   23204 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 16:09:26.222229   23204 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 16:09:26.361237   23204 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 16:09:26.518292   23204 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 16:09:26.518801   23204 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 16:09:26.548808   23204 out.go:204]   - Booting up control plane ...
	I0213 16:09:26.548949   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 16:09:26.549087   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 16:09:26.549176   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 16:09:26.549321   23204 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 16:09:26.549634   23204 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 16:10:06.528417   23204 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 16:10:06.530098   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:06.530405   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:11.531782   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:11.531959   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:21.532836   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:21.533022   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:41.535320   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:41.535554   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:11:21.534963   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:11:21.535176   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:11:21.535189   23204 kubeadm.go:322] 
	I0213 16:11:21.535259   23204 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 16:11:21.535294   23204 kubeadm.go:322] 	timed out waiting for the condition
	I0213 16:11:21.535299   23204 kubeadm.go:322] 
	I0213 16:11:21.535323   23204 kubeadm.go:322] This error is likely caused by:
	I0213 16:11:21.535348   23204 kubeadm.go:322] 	- The kubelet is not running
	I0213 16:11:21.535437   23204 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 16:11:21.535447   23204 kubeadm.go:322] 
	I0213 16:11:21.535518   23204 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 16:11:21.535549   23204 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 16:11:21.535598   23204 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 16:11:21.535611   23204 kubeadm.go:322] 
	I0213 16:11:21.535719   23204 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 16:11:21.535833   23204 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 16:11:21.535954   23204 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 16:11:21.536018   23204 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 16:11:21.536119   23204 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 16:11:21.536146   23204 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 16:11:21.541193   23204 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 16:11:21.541292   23204 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 16:11:21.541420   23204 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 16:11:21.541537   23204 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:11:21.541659   23204 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 16:11:21.541779   23204 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 16:11:21.541823   23204 kubeadm.go:406] StartCluster complete in 8m6.083286501s
	I0213 16:11:21.541910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:11:21.566985   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.567021   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:11:21.567086   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:11:21.587296   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.587310   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:11:21.587378   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:11:21.613957   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.613976   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:11:21.614072   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:11:21.638991   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.639005   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:11:21.639104   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:11:21.657743   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.657757   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:11:21.657821   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:11:21.677535   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.677551   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:11:21.677616   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:11:21.698531   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.698559   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:11:21.698708   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:11:21.726649   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.726688   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:11:21.726705   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:11:21.726735   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:11:21.773929   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:11:21.773944   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:11:21.794784   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:11:21.794807   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:11:21.874118   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:11:21.874131   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:11:21.874156   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:11:21.899202   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:11:21.899229   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 16:11:21.974248   23204 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 16:11:21.974270   23204 out.go:239] * 
	W0213 16:11:21.974321   23204 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 16:11:21.974335   23204 out.go:239] * 
	W0213 16:11:21.975059   23204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 16:11:22.059690   23204 out.go:177] 
	W0213 16:11:22.101576   23204 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 16:11:22.101645   23204 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 16:11:22.101683   23204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 16:11:22.122643   23204 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
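Per the "* Suggestion" line in the stderr above, the usual next step for a K8S_KUBELET_NOT_RUNNING failure is to check `journalctl -xeu kubelet` and retry the start with an explicit kubelet cgroup driver. A minimal sketch of that retry, reusing the failing run's profile and flags (not executed as part of this report):

	out/minikube-darwin-amd64 start -p old-k8s-version-745000 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd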
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:02:56.378811968Z",
	            "FinishedAt": "2024-02-14T00:02:53.615023812Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2a64fcfa6aa11a20fff2e331cf5eccb1c94776e7c7a038087879a448cd30e88",
	            "SandboxKey": "/var/run/docker/netns/f2a64fcfa6aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56672"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56673"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56674"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56675"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56676"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "591102cebfe18f51413c628ffec03eb73caab8e92285d1cbd8a06cabbd6bb2f8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
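Rather than re-reading the full JSON above, individual fields can be pulled from `docker inspect` with its Go-template `--format`/`-f` flag; for example, the host port mapped to the container's SSH port (a sketch, assuming the container still exists; against the state above it would print 56672):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-745000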
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (485.415256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25: (2.260705086s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-208000 sudo                                  | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:57 PST |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-208000 sudo                                  | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-208000 sudo                                  | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:57 PST |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-208000 sudo find                             | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:57 PST |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-208000 sudo crio                             | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:57 PST |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-208000                                       | bridge-208000          | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:57 PST |
	| start   | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 15:57 PST | 13 Feb 24 15:58 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-476000             | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 15:58 PST | 13 Feb 24 15:58 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 15:58 PST | 13 Feb 24 15:58 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-476000                  | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 15:58 PST | 13 Feb 24 15:58 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 15:58 PST | 13 Feb 24 16:04 PST |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-745000        | old-k8s-version-745000 | jenkins | v1.32.0 | 13 Feb 24 16:01 PST |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-745000                              | old-k8s-version-745000 | jenkins | v1.32.0 | 13 Feb 24 16:02 PST | 13 Feb 24 16:02 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-745000             | old-k8s-version-745000 | jenkins | v1.32.0 | 13 Feb 24 16:02 PST | 13 Feb 24 16:02 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-745000                              | old-k8s-version-745000 | jenkins | v1.32.0 | 13 Feb 24 16:02 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| image   | no-preload-476000 image list                           | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:04 PST |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:04 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:04 PST |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:04 PST |
	| delete  | -p no-preload-476000                                   | no-preload-476000      | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:04 PST |
	| start   | -p embed-certs-743000                                  | embed-certs-743000     | jenkins | v1.32.0 | 13 Feb 24 16:04 PST | 13 Feb 24 16:05 PST |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743000            | embed-certs-743000     | jenkins | v1.32.0 | 13 Feb 24 16:05 PST | 13 Feb 24 16:05 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-743000                                  | embed-certs-743000     | jenkins | v1.32.0 | 13 Feb 24 16:06 PST | 13 Feb 24 16:06 PST |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743000                 | embed-certs-743000     | jenkins | v1.32.0 | 13 Feb 24 16:06 PST | 13 Feb 24 16:06 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-743000                                  | embed-certs-743000     | jenkins | v1.32.0 | 13 Feb 24 16:06 PST |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 16:06:11
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 16:06:11.508603   23626 out.go:291] Setting OutFile to fd 1 ...
	I0213 16:06:11.508779   23626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:06:11.508784   23626 out.go:304] Setting ErrFile to fd 2...
	I0213 16:06:11.508788   23626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:06:11.508977   23626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 16:06:11.510430   23626 out.go:298] Setting JSON to false
	I0213 16:06:11.534380   23626 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6031,"bootTime":1707863140,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 16:06:11.534517   23626 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 16:06:11.557037   23626 out.go:177] * [embed-certs-743000] minikube v1.32.0 on Darwin 14.3.1
	I0213 16:06:11.600787   23626 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 16:06:11.600835   23626 notify.go:220] Checking for updates...
	I0213 16:06:11.644601   23626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:06:11.666631   23626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 16:06:11.688778   23626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 16:06:11.710587   23626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 16:06:11.731513   23626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 16:06:11.753117   23626 config.go:182] Loaded profile config "embed-certs-743000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 16:06:11.753695   23626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 16:06:11.809428   23626 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 16:06:11.809589   23626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:06:11.920078   23626 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:06:11.908493411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
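(Editor's note: the `docker system info --format "{{json .}}"` probe above is how the daemon's capabilities are snapshotted before reusing a profile. A minimal Go sketch of consuming that output; the struct is a hand-picked subset of the JSON keys Docker emits, not minikube's actual type.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo is an illustrative subset of `docker system info` output;
// field names follow the JSON keys Docker emits.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	// Same query as the log's cli_runner invocation.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
}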
	I0213 16:06:11.964730   23626 out.go:177] * Using the docker driver based on existing profile
	I0213 16:06:11.985975   23626 start.go:298] selected driver: docker
	I0213 16:06:11.985992   23626 start.go:902] validating driver "docker" against &{Name:embed-certs-743000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:06:11.986059   23626 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 16:06:11.989443   23626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:06:12.114035   23626 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:06:12.102735219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:06:12.114296   23626 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 16:06:12.114354   23626 cni.go:84] Creating CNI manager for ""
	I0213 16:06:12.114374   23626 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:06:12.114385   23626 start_flags.go:321] config:
	{Name:embed-certs-743000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
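(Editor's note: the cni.go:158 line above records the CNI decision: with the docker driver and docker runtime on Kubernetes v1.24+, where dockershim is gone and cri-dockerd is used, bridge CNI is recommended. A sketch of that version gate, using golang.org/x/mod/semver; this is an illustrative reduction, not minikube's real cni package.)

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// chooseCNI mirrors the logged decision in spirit: docker driver plus
// docker runtime on Kubernetes v1.24+ yields the bridge CNI.
func chooseCNI(driver, runtime, k8sVersion string) string {
	if driver == "docker" && runtime == "docker" && semver.Compare(k8sVersion, "v1.24.0") >= 0 {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("docker", "docker", "v1.28.4")) // prints "bridge"
}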
	I0213 16:06:12.156251   23626 out.go:177] * Starting control plane node embed-certs-743000 in cluster embed-certs-743000
	I0213 16:06:12.177440   23626 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 16:06:12.198432   23626 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 16:06:12.240497   23626 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 16:06:12.240524   23626 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 16:06:12.240544   23626 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 16:06:12.240556   23626 cache.go:56] Caching tarball of preloaded images
	I0213 16:06:12.240666   23626 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 16:06:12.240676   23626 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 16:06:12.240765   23626 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/config.json ...
	I0213 16:06:12.296537   23626 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 16:06:12.296559   23626 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 16:06:12.296580   23626 cache.go:194] Successfully downloaded all kic artifacts
	I0213 16:06:12.296618   23626 start.go:365] acquiring machines lock for embed-certs-743000: {Name:mkd724e10ef31ac2ff17f479b68dd352dfbf016f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 16:06:12.296727   23626 start.go:369] acquired machines lock for "embed-certs-743000" in 89.73µs
	I0213 16:06:12.296756   23626 start.go:96] Skipping create...Using existing machine configuration
	I0213 16:06:12.296768   23626 fix.go:54] fixHost starting: 
	I0213 16:06:12.297028   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:06:12.351682   23626 fix.go:102] recreateIfNeeded on embed-certs-743000: state=Stopped err=<nil>
	W0213 16:06:12.351735   23626 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 16:06:12.373538   23626 out.go:177] * Restarting existing docker container for "embed-certs-743000" ...
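(Editor's note: fixHost above inspects the container's .State.Status and, finding it Stopped, restarts it rather than recreating it. A minimal Go sketch of that inspect-then-start flow; the helper names are hypothetical, minikube drives this through its cli_runner package.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the value of .State.Status for a named container,
// the same query issued repeatedly in the log.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// ensureRunning restarts a stopped container, mirroring the
// "Restarting existing docker container" step above.
func ensureRunning(name string) error {
	state, err := containerState(name)
	if err != nil {
		return err
	}
	if state == "running" {
		return nil
	}
	return exec.Command("docker", "start", name).Run()
}

func main() {
	if err := ensureRunning("embed-certs-743000"); err != nil {
		fmt.Println("restart failed:", err)
	}
}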
	I0213 16:06:11.878364   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:11.899479   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:11.921300   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.921311   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:11.921372   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:11.940349   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.940365   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:11.940433   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:11.958295   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.958308   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:11.958374   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:11.978133   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.978147   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:11.978212   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:11.996930   23204 logs.go:276] 0 containers: []
	W0213 16:06:11.996949   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:11.997009   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:12.018116   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.018129   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:12.018204   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:12.038622   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.038636   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:12.038695   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:12.073503   23204 logs.go:276] 0 containers: []
	W0213 16:06:12.073523   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:12.073534   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:12.073545   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:12.128415   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:12.128435   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:12.151550   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:12.151571   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:12.232156   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:12.232167   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:12.232175   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:12.256305   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:12.256321   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
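(Editor's note: the interleaved pid-23204 lines are a diagnostic loop from the parallel old-k8s-version test: for each control-plane component it lists containers whose name matches k8s_<component>, and when none are found it falls back to kubelet, dmesg, journal, and crictl output. A sketch of the name-filter listing; component list abridged.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersMatching lists container IDs whose name matches the filter,
// the same per-component query the log issues above.
func containersMatching(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	s := strings.TrimSpace(string(out))
	if s == "" {
		return nil, nil
	}
	return strings.Split(s, "\n"), nil
}

func main() {
	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containersMatching(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}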
	I0213 16:06:14.823991   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:14.840749   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:14.857498   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.857521   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:14.857596   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:14.875478   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.875492   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:14.875559   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:14.892613   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.892629   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:14.892699   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:14.910463   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.910477   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:14.910539   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:14.928500   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.928514   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:14.928578   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:14.946365   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.946381   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:14.946468   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:14.963486   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.963500   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:14.963565   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:14.981360   23204 logs.go:276] 0 containers: []
	W0213 16:06:14.981374   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:14.981381   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:14.981389   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:15.039828   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:15.039856   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:15.039865   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:15.063337   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:15.063352   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:15.179124   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:15.179140   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:12.416403   23626 cli_runner.go:164] Run: docker start embed-certs-743000
	I0213 16:06:12.676689   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:06:12.734548   23626 kic.go:430] container "embed-certs-743000" state is running.
	I0213 16:06:12.735167   23626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-743000
	I0213 16:06:12.795114   23626 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/config.json ...
	I0213 16:06:12.795665   23626 machine.go:88] provisioning docker machine ...
	I0213 16:06:12.795699   23626 ubuntu.go:169] provisioning hostname "embed-certs-743000"
	I0213 16:06:12.795791   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:12.873436   23626 main.go:141] libmachine: Using SSH client type: native
	I0213 16:06:12.873830   23626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56812 <nil> <nil>}
	I0213 16:06:12.873845   23626 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743000 && echo "embed-certs-743000" | sudo tee /etc/hostname
	I0213 16:06:12.875071   23626 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 16:06:16.037002   23626 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743000
	
	I0213 16:06:16.037102   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:16.094939   23626 main.go:141] libmachine: Using SSH client type: native
	I0213 16:06:16.095262   23626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56812 <nil> <nil>}
	I0213 16:06:16.095277   23626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 16:06:16.241800   23626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:06:16.241820   23626 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 16:06:16.241842   23626 ubuntu.go:177] setting up certificates
	I0213 16:06:16.241848   23626 provision.go:83] configureAuth start
	I0213 16:06:16.241927   23626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-743000
	I0213 16:06:16.293574   23626 provision.go:138] copyHostCerts
	I0213 16:06:16.293697   23626 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 16:06:16.293711   23626 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 16:06:16.293860   23626 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 16:06:16.294107   23626 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 16:06:16.294114   23626 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 16:06:16.294191   23626 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 16:06:16.294362   23626 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 16:06:16.294368   23626 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 16:06:16.294449   23626 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 16:06:16.294610   23626 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-743000]
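(Editor's note: configureAuth above issues a server certificate whose SANs cover the container IP, loopback, and the listed machine names. A compact sketch of issuing such a cert with crypto/x509; it is self-contained, so it creates a throwaway CA instead of reading ca.pem/ca-key.pem.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the "generating server cert" line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-743000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-743000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes DER\n", len(der))
}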
	I0213 16:06:16.593499   23626 provision.go:172] copyRemoteCerts
	I0213 16:06:16.593613   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 16:06:16.593707   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:16.645171   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:06:16.751518   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 16:06:16.792551   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 16:06:16.842991   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 16:06:16.888319   23626 provision.go:86] duration metric: configureAuth took 646.467334ms
	I0213 16:06:16.888350   23626 ubuntu.go:193] setting minikube options for container-runtime
	I0213 16:06:16.888530   23626 config.go:182] Loaded profile config "embed-certs-743000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 16:06:16.888623   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:16.942987   23626 main.go:141] libmachine: Using SSH client type: native
	I0213 16:06:16.943271   23626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56812 <nil> <nil>}
	I0213 16:06:16.943283   23626 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 16:06:17.083807   23626 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 16:06:17.083823   23626 ubuntu.go:71] root file system type: overlay
	I0213 16:06:17.083912   23626 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 16:06:17.083987   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:17.137364   23626 main.go:141] libmachine: Using SSH client type: native
	I0213 16:06:17.137651   23626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56812 <nil> <nil>}
	I0213 16:06:17.137699   23626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 16:06:17.300759   23626 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 16:06:17.300890   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:17.354241   23626 main.go:141] libmachine: Using SSH client type: native
	I0213 16:06:17.354528   23626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 56812 <nil> <nil>}
	I0213 16:06:17.354542   23626 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
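(Editor's note: the unit is written to docker.service.new and only swapped in, with a daemon-reload, enable, and restart, when diff -u reports a change, so an unchanged config never bounces the daemon. A Go sketch of the same compare-then-swap pattern; paths are the ones from the log, but real minikube runs this as the SSH command shown above.)

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// swapIfChanged installs newPath over path only when the contents differ,
// then reloads and restarts the unit - the idempotent pattern behind the
// "diff -u ... || { mv ...; systemctl ... }" command above.
func swapIfChanged(path, newPath, unit string) error {
	old, _ := os.ReadFile(path) // a missing file reads as empty, forcing an install
	fresh, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(old, fresh) {
		return os.Remove(newPath) // nothing changed; discard the candidate
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := swapIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	fmt.Println(err)
}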
	I0213 16:06:17.503387   23626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:06:17.503406   23626 machine.go:91] provisioned docker machine in 4.707833322s
	I0213 16:06:17.503415   23626 start.go:300] post-start starting for "embed-certs-743000" (driver="docker")
	I0213 16:06:17.503422   23626 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 16:06:17.503505   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 16:06:17.503570   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:17.557955   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:06:17.664891   23626 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 16:06:17.669799   23626 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 16:06:17.669820   23626 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 16:06:17.669842   23626 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 16:06:17.669848   23626 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 16:06:17.669856   23626 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 16:06:17.669975   23626 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 16:06:17.670195   23626 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 16:06:17.670430   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 16:06:17.684963   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:06:17.725679   23626 start.go:303] post-start completed in 222.259328ms
	I0213 16:06:17.725827   23626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 16:06:17.725932   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:17.781232   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:06:17.875654   23626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 16:06:17.881161   23626 fix.go:56] fixHost completed within 5.584513585s
	I0213 16:06:17.881179   23626 start.go:83] releasing machines lock for "embed-certs-743000", held for 5.584562908s
	I0213 16:06:17.881262   23626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-743000
	I0213 16:06:17.937586   23626 ssh_runner.go:195] Run: cat /version.json
	I0213 16:06:17.937593   23626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 16:06:17.937658   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:17.937670   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:18.000110   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:06:18.000238   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:06:18.202511   23626 ssh_runner.go:195] Run: systemctl --version
	I0213 16:06:18.207760   23626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 16:06:18.213544   23626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 16:06:18.244944   23626 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 16:06:18.245032   23626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 16:06:18.261960   23626 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 16:06:18.261988   23626 start.go:475] detecting cgroup driver to use...
	I0213 16:06:18.262000   23626 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:06:18.262108   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:06:18.290032   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 16:06:18.308128   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 16:06:18.326092   23626 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 16:06:18.326166   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 16:06:18.352591   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:06:18.373767   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 16:06:18.392809   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:06:18.408930   23626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 16:06:18.426811   23626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 16:06:18.443244   23626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 16:06:18.458491   23626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 16:06:18.472986   23626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:06:18.537782   23626 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 16:06:18.624280   23626 start.go:475] detecting cgroup driver to use...
	I0213 16:06:18.624361   23626 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:06:18.624438   23626 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 16:06:18.643912   23626 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 16:06:18.643986   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 16:06:18.664246   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:06:18.699532   23626 ssh_runner.go:195] Run: which cri-dockerd
	I0213 16:06:18.707671   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 16:06:18.731111   23626 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 16:06:18.763231   23626 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 16:06:18.863326   23626 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 16:06:18.960922   23626 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 16:06:18.961018   23626 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 16:06:18.990607   23626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:06:19.059058   23626 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 16:06:19.376306   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 16:06:19.394810   23626 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 16:06:19.413956   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:06:19.431785   23626 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 16:06:19.496100   23626 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 16:06:19.561014   23626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:06:19.625356   23626 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 16:06:19.662528   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:06:19.680024   23626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:06:19.747176   23626 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 16:06:19.840569   23626 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 16:06:19.840692   23626 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 16:06:19.845276   23626 start.go:543] Will wait 60s for crictl version
	I0213 16:06:19.845324   23626 ssh_runner.go:195] Run: which crictl
	I0213 16:06:19.849429   23626 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 16:06:19.902619   23626 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 16:06:19.902694   23626 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:06:19.924914   23626 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:06:15.223147   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:15.244191   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:17.764785   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:17.785042   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:17.804809   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.804821   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:17.804889   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:17.822803   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.822817   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:17.822885   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:17.842934   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.842948   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:17.843015   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:17.860834   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.860847   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:17.860910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:17.881108   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.881125   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:17.881210   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:17.901797   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.901810   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:17.901879   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:17.922651   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.922665   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:17.922795   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:17.942346   23204 logs.go:276] 0 containers: []
	W0213 16:06:17.942360   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:17.942369   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:17.942381   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:17.992362   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:17.992387   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:18.016279   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:18.016322   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:18.167950   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:18.167962   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:18.167983   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:18.190970   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:18.190986   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:19.993091   23626 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 16:06:19.993236   23626 cli_runner.go:164] Run: docker exec -t embed-certs-743000 dig +short host.docker.internal
	I0213 16:06:20.122968   23626 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 16:06:20.123114   23626 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 16:06:20.128083   23626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
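(Editor's note: both hosts-file updates, host.minikube.internal here and control-plane.minikube.internal later, use the same trick: filter out any existing line for the name, append the fresh mapping, and copy the result back via a temp file. A pure-Go equivalent; the helper is hypothetical, the log does this with bash over SSH.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, keeping all unrelated lines - the same effect as the
// grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"))
}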
	I0213 16:06:20.146061   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:20.200637   23626 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 16:06:20.200728   23626 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:06:20.221222   23626 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 16:06:20.221242   23626 docker.go:615] Images already preloaded, skipping extraction
	I0213 16:06:20.221319   23626 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:06:20.241246   23626 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0213 16:06:20.241274   23626 cache_images.go:84] Images are preloaded, skipping loading
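(Editor's note: the preload check compares the images the tarball should provide against `docker images --format {{.Repository}}:{{.Tag}}`; because everything listed above is already present, both extraction and loading are skipped. A sketch of that set comparison, with the expected list abridged from the stdout block.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing the log runs twice above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{ // abridged from the preloaded-images list above
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img) // would trigger extraction/loading
		}
	}
	fmt.Println("check complete")
}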
	I0213 16:06:20.241350   23626 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 16:06:20.287242   23626 cni.go:84] Creating CNI manager for ""
	I0213 16:06:20.287261   23626 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:06:20.287275   23626 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 16:06:20.287331   23626 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743000 NodeName:embed-certs-743000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 16:06:20.287467   23626 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-743000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
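	
	The dump above is the complete kubeadm configuration minikube writes to /var/tmp/minikube/kubeadm.yaml: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". As a minimal sketch of how such a multi-document file can be inspected, assuming the gopkg.in/yaml.v3 package (minikube itself generates this file from templates rather than parsing it this way):
	
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		// yaml.Decoder walks the "---"-separated documents one at a time.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF after the fourth document
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
	
	Run against the file dumped above, this would print the four kinds in order.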
	
	I0213 16:06:20.287543   23626 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-743000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 16:06:20.287609   23626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 16:06:20.302762   23626 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 16:06:20.302836   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 16:06:20.317564   23626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0213 16:06:20.346877   23626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 16:06:20.376172   23626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0213 16:06:20.406213   23626 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0213 16:06:20.410978   23626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
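	
	The bash one-liner at 16:06:20.410978 makes the /etc/hosts entry idempotent: it strips any line already ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the temp file back into place. A minimal Go sketch of the same filter-and-append idea against a local file (upsertHostsEntry and the example path are illustrative, not minikube's code):
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	// upsertHostsEntry drops any prior line ending in "\t<host>" and appends a
	// fresh "ip\thost" mapping, mirroring the grep -v / echo pipeline in the
	// log. Writing a temp file and renaming keeps the update atomic.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}
	
	func main() {
		// Illustrative target; the real command edits /etc/hosts as root.
		_ = upsertHostsEntry("/tmp/hosts.example", "192.168.76.2", "control-plane.minikube.internal")
	}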
	I0213 16:06:20.428844   23626 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000 for IP: 192.168.76.2
	I0213 16:06:20.428866   23626 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:06:20.429063   23626 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 16:06:20.429193   23626 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 16:06:20.429302   23626 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/client.key
	I0213 16:06:20.429474   23626 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/apiserver.key.31bdca25
	I0213 16:06:20.429551   23626 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/proxy-client.key
	I0213 16:06:20.429802   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 16:06:20.429883   23626 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 16:06:20.429892   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 16:06:20.429944   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 16:06:20.430007   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 16:06:20.430047   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 16:06:20.430113   23626 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:06:20.430692   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 16:06:20.471073   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 16:06:20.511957   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 16:06:20.553143   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/embed-certs-743000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 16:06:20.595224   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 16:06:20.635802   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 16:06:20.676828   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 16:06:20.719255   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 16:06:20.759961   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 16:06:20.806646   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 16:06:20.861110   23626 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 16:06:20.911561   23626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 16:06:20.945828   23626 ssh_runner.go:195] Run: openssl version
	I0213 16:06:20.952967   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 16:06:20.971276   23626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 16:06:20.976368   23626 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 16:06:20.976433   23626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 16:06:20.983894   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 16:06:21.001150   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 16:06:21.021344   23626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:06:21.026032   23626 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:06:21.026086   23626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:06:21.033670   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 16:06:21.049333   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 16:06:21.065920   23626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 16:06:21.071411   23626 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 16:06:21.071471   23626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 16:06:21.079658   23626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 16:06:21.099027   23626 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 16:06:21.104207   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 16:06:21.114659   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 16:06:21.122076   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 16:06:21.128918   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 16:06:21.136026   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 16:06:21.143786   23626 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
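	
	Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks a single question: does the certificate expire within the next 86400 seconds (24 hours)? A stdlib-only Go sketch of the equivalent check (the function name and example path are illustrative):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in a PEM file expires
	// within d, which is what "openssl x509 -checkend <seconds>" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}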
	I0213 16:06:21.150906   23626 kubeadm.go:404] StartCluster: {Name:embed-certs-743000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:06:21.151037   23626 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:06:21.171991   23626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 16:06:21.188440   23626 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 16:06:21.188460   23626 kubeadm.go:636] restartCluster start
	I0213 16:06:21.188519   23626 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 16:06:21.205634   23626 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:21.205738   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:06:21.261506   23626 kubeconfig.go:135] verify returned: extract IP: "embed-certs-743000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:06:21.261680   23626 kubeconfig.go:146] "embed-certs-743000" context is missing from /Users/jenkins/minikube-integration/18169-6320/kubeconfig - will repair!
	I0213 16:06:21.262013   23626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:06:21.263371   23626 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 16:06:21.278727   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:21.278782   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:21.294784   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:20.757937   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:20.775401   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:20.794812   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.794828   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:20.794899   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:20.817500   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.817518   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:20.817597   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:20.865196   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.865210   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:20.865268   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:20.885292   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.885308   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:20.885389   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:20.907484   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.907498   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:20.907567   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:20.928848   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.928862   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:20.928944   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:20.948712   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.948726   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:20.948794   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:20.972289   23204 logs.go:276] 0 containers: []
	W0213 16:06:20.972302   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:20.972313   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:20.972324   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:21.021004   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:21.021031   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:21.042321   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:21.042337   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:21.118911   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:21.118922   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:21.118930   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:21.141618   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:21.141634   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:23.710574   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:23.728656   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:23.747854   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.747867   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:23.747958   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:23.765397   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.765411   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:23.765475   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:23.784188   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.784200   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:23.784265   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:23.803280   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.803292   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:23.803357   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:23.822599   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.822613   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:23.822679   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:23.840718   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.840732   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:23.840797   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:23.860100   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.860114   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:23.860237   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:23.879133   23204 logs.go:276] 0 containers: []
	W0213 16:06:23.879148   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:23.879155   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:23.879161   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:23.926130   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:23.926147   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:23.945951   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:23.946010   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:24.013479   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:24.013491   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:24.013519   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:24.036470   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:24.036487   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:21.778942   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:21.779041   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:21.795912   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:22.280042   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:22.280203   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:22.298535   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:22.778819   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:22.778988   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:22.796927   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:23.280131   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:23.280262   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:23.299479   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:23.778729   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:23.778849   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:23.797091   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:24.280220   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:24.280336   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:24.297551   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:24.779252   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:24.779319   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:24.796608   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:25.278820   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:25.279004   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:25.297321   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:25.779008   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:25.779141   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:25.796454   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:26.278714   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:26.278817   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:26.295782   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:26.606918   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:26.625318   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:26.643082   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.643096   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:26.643172   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:26.662047   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.662061   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:26.662137   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:26.681290   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.681303   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:26.681367   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:26.699193   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.699207   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:26.699292   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:26.717664   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.717678   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:26.717743   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:26.733881   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.733904   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:26.733984   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:26.752332   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.752347   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:26.752413   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:26.771883   23204 logs.go:276] 0 containers: []
	W0213 16:06:26.771898   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:26.771906   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:26.771927   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:26.816963   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:26.816975   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:26.837223   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:26.837269   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:26.900361   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:26.900376   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:26.900384   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:26.923407   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:26.923422   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:29.486289   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:29.504817   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:29.524064   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.524079   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:29.524146   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:29.542407   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.542421   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:29.542489   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:29.564073   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.564090   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:29.564169   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:29.584218   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.584231   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:29.584295   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:29.603097   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.603113   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:29.603189   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:29.621373   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.621389   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:29.621456   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:29.641120   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.641140   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:29.641214   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:29.659507   23204 logs.go:276] 0 containers: []
	W0213 16:06:29.659519   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:29.659527   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:29.659541   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:29.704150   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:29.704167   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:29.724757   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:29.724793   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:29.794272   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:29.794283   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:29.794291   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:29.816362   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:29.816377   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:26.779088   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:26.779162   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:26.796628   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:27.278692   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:27.278788   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:27.295685   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:27.780787   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:27.780920   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:27.798279   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:28.280648   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:28.280751   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:28.297885   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:28.779039   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:28.779123   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:28.795791   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:29.280760   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:29.280890   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:29.299016   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:29.779455   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:29.779635   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:29.799340   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:30.278999   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:30.279093   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:30.296688   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:30.779371   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:30.779525   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:30.797148   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:31.279579   23626 api_server.go:166] Checking apiserver status ...
	I0213 16:06:31.279824   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:06:31.297385   23626 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
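	
	The repeated "Checking apiserver status" entries above are one retry loop: roughly every 500 ms minikube runs pgrep -xnf kube-apiserver.*minikube.* until either a PID appears or the surrounding context deadline lapses, at which point (next entry) it concludes the cluster needs reconfiguring. A stdlib-only Go sketch of that poll-with-deadline pattern (interval, timeout, and probe are illustrative):
	
	package main
	
	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForProcess runs probe every interval until it succeeds or the
	// context deadline lapses, mirroring the ~500 ms apiserver checks above.
	func waitForProcess(ctx context.Context, interval time.Duration, probe func() error) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := probe(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded", as in the log
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		err := waitForProcess(ctx, 500*time.Millisecond, func() error {
			// pgrep exits non-zero when nothing matches, just like in the log.
			return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		})
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("apiserver never appeared: needs reconfigure")
		}
	}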
	I0213 16:06:31.297400   23626 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 16:06:31.297413   23626 kubeadm.go:1135] stopping kube-system containers ...
	I0213 16:06:31.297479   23626 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:06:31.317439   23626 docker.go:483] Stopping containers: [4b647859582a b35c9bdd9529 48293fb599ad c38f727fd938 f01d5c2353f2 cfa435511f87 e5dd46140e0a bdea6fd743ab e49e01cb84c3 98d3c008da60 dfdba27d5e45 cb770d7507a4 44af1f40f578 898de47116a6 f8c2e35e8706]
	I0213 16:06:31.317524   23626 ssh_runner.go:195] Run: docker stop 4b647859582a b35c9bdd9529 48293fb599ad c38f727fd938 f01d5c2353f2 cfa435511f87 e5dd46140e0a bdea6fd743ab e49e01cb84c3 98d3c008da60 dfdba27d5e45 cb770d7507a4 44af1f40f578 898de47116a6 f8c2e35e8706
	I0213 16:06:31.337270   23626 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 16:06:31.355589   23626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:06:31.370566   23626 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 14 00:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 14 00:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 14 00:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 14 00:04 /etc/kubernetes/scheduler.conf
	
	I0213 16:06:31.370633   23626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 16:06:31.385496   23626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 16:06:31.400303   23626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 16:06:31.414958   23626 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:31.415015   23626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 16:06:31.431488   23626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 16:06:31.446611   23626 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:06:31.446678   23626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 16:06:31.461166   23626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:06:31.476427   23626 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 16:06:31.476443   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:32.380949   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:32.398390   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:32.419831   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.419849   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:32.419932   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:32.440511   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.440529   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:32.440645   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:32.459350   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.459367   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:32.459476   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:32.479152   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.479168   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:32.479257   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:32.500470   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.500486   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:32.500594   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:32.523244   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.523267   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:32.523368   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:32.548253   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.548272   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:32.548376   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:32.573899   23204 logs.go:276] 0 containers: []
	W0213 16:06:32.573918   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:32.573933   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:32.573946   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:32.638927   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:32.638947   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:32.660545   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:32.660629   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:32.737721   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:32.737737   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:32.737746   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:32.764410   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:32.764429   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:31.531470   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:32.322498   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:32.463749   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:32.532758   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:32.705810   23626 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:06:32.705937   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:33.206898   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:33.705984   23626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:33.732174   23626 api_server.go:72] duration metric: took 1.026386729s to wait for apiserver process to appear ...
	I0213 16:06:33.732211   23626 api_server.go:88] waiting for apiserver healthz status ...
	I0213 16:06:33.732229   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:33.733986   23626 api_server.go:269] stopped: https://127.0.0.1:56811/healthz: Get "https://127.0.0.1:56811/healthz": EOF
	I0213 16:06:34.233095   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:36.315538   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:06:36.315565   23626 api_server.go:103] status: https://127.0.0.1:56811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:06:36.315574   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:36.403255   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:06:36.403293   23626 api_server.go:103] status: https://127.0.0.1:56811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:06:36.733077   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:36.739515   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:06:36.739541   23626 api_server.go:103] status: https://127.0.0.1:56811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:06:37.232376   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:37.305786   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:06:37.305818   23626 api_server.go:103] status: https://127.0.0.1:56811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:06:37.732245   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:37.739578   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:06:37.739604   23626 api_server.go:103] status: https://127.0.0.1:56811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:06:38.232516   23626 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56811/healthz ...
	I0213 16:06:38.237612   23626 api_server.go:279] https://127.0.0.1:56811/healthz returned 200:
	ok
	I0213 16:06:38.245086   23626 api_server.go:141] control plane version: v1.28.4
	I0213 16:06:38.245103   23626 api_server.go:131] duration metric: took 4.512984966s to wait for apiserver health ...
	I0213 16:06:38.245111   23626 cni.go:84] Creating CNI manager for ""
	I0213 16:06:38.245126   23626 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:06:38.269268   23626 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
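
The block above captures minikube's api_server.go polling https://127.0.0.1:56811/healthz at roughly 500ms intervals until the apiserver answers 200; each 500 body enumerates the post-start hooks, with poststarthook/rbac/bootstrap-roles the last to clear. A minimal Go sketch of such a poll loop, assuming the port from this run (it varies per run) and a 2-minute deadline (minikube's actual timeout differs), with TLS verification skipped because the apiserver's certificate is not in the host trust store:

    // healthz_poll.go: a minimal sketch of an apiserver healthz poll,
    // in the spirit of the api_server.go lines above. Not minikube's code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        url := "https://127.0.0.1:56811/healthz" // port taken from this run's log
        deadline := time.Now().Add(2 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // A 500 body lists each [+]/[-] post-start hook, as in the log.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // interval observed in the timestamps above
        }
        fmt.Println("timed out waiting for healthz")
    }
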
	I0213 16:06:35.335925   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:35.354753   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:35.377553   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.377569   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:35.377663   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:35.400589   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.400606   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:35.400685   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:35.422229   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.422246   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:35.422317   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:35.443969   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.443983   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:35.444055   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:35.467060   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.467076   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:35.467145   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:35.488796   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.488814   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:35.488910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:35.510311   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.510326   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:35.510405   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:35.535580   23204 logs.go:276] 0 containers: []
	W0213 16:06:35.535597   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:35.535605   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:35.535612   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:35.583314   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:35.583334   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:35.606200   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:35.606217   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:35.681475   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:35.681486   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:35.681499   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:35.705786   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:35.705815   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:38.288536   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:38.308492   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:38.331473   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.331525   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:38.331615   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:38.355412   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.355428   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:38.355507   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:38.380139   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.380179   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:38.380284   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:38.404699   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.404727   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:38.404830   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:38.430046   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.430062   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:38.430131   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:38.451994   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.452011   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:38.452085   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:38.475507   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.475523   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:38.475605   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:38.496331   23204 logs.go:276] 0 containers: []
	W0213 16:06:38.496346   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:38.496354   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:38.496360   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:38.545370   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:38.545391   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:38.567965   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:38.567981   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:38.639688   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:38.639702   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:38.639714   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:38.661187   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:38.661202   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
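
Each gathering round above shells out to docker ps -a with a k8s_<component> name filter and finds 0 containers for every control-plane component, which is why the round falls back to kubelet, dmesg, Docker, and crictl logs. A minimal sketch of that probe via os/exec, assuming a local docker CLI; containerIDs is a hypothetical helper name, not minikube's:

    // A minimal sketch of the "docker ps -a --filter name=k8s_..." probe
    // behind the logs.go:276 lines above. The k8s_ prefix matches containers
    // created for Kubernetes pods by dockershim/cri-dockerd.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 format
        }
    }
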
	I0213 16:06:38.290996   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 16:06:38.308315   23626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 16:06:38.345669   23626 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 16:06:38.356530   23626 system_pods.go:59] 8 kube-system pods found
	I0213 16:06:38.356547   23626 system_pods.go:61] "coredns-5dd5756b68-wpmhv" [2f465628-913e-4a1a-ad0f-eef349c79e0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 16:06:38.356554   23626 system_pods.go:61] "etcd-embed-certs-743000" [b22704e3-9a8d-407d-af82-b75c5dca2694] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 16:06:38.356559   23626 system_pods.go:61] "kube-apiserver-embed-certs-743000" [eeb5abba-fd73-4e8b-884c-727553380d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 16:06:38.356565   23626 system_pods.go:61] "kube-controller-manager-embed-certs-743000" [8fc69b83-0587-4de8-b18f-12b9bd8590da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 16:06:38.356572   23626 system_pods.go:61] "kube-proxy-zh28s" [1a27e1f4-9125-4505-9897-21efa665d319] Running
	I0213 16:06:38.356578   23626 system_pods.go:61] "kube-scheduler-embed-certs-743000" [8fef7367-65e2-437b-94ec-baac61b3d65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 16:06:38.356584   23626 system_pods.go:61] "metrics-server-57f55c9bc5-sqmnm" [cdb95a4b-d9f6-4389-8735-5792adc38803] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 16:06:38.356589   23626 system_pods.go:61] "storage-provisioner" [cab68d35-d81d-41a4-b028-b1306977054c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 16:06:38.356593   23626 system_pods.go:74] duration metric: took 10.909777ms to wait for pod list to return data ...
	I0213 16:06:38.356598   23626 node_conditions.go:102] verifying NodePressure condition ...
	I0213 16:06:38.360119   23626 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 16:06:38.360139   23626 node_conditions.go:123] node cpu capacity is 12
	I0213 16:06:38.360149   23626 node_conditions.go:105] duration metric: took 3.547412ms to run NodePressure ...
	I0213 16:06:38.360160   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:06:38.568321   23626 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 16:06:38.573260   23626 kubeadm.go:787] kubelet initialised
	I0213 16:06:38.573272   23626 kubeadm.go:788] duration metric: took 4.940148ms waiting for restarted kubelet to initialise ...
	I0213 16:06:38.573279   23626 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 16:06:38.579677   23626 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:40.588171   23626 pod_ready.go:102] pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace has status "Ready":"False"
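
The pod_ready.go lines above report whether each system pod's Ready condition is True; `has status "Ready":"False"` means the condition is present but not yet satisfied. A minimal sketch of that condition check against the upstream corev1 types; isPodReady is a hypothetical helper name, not minikube's:

    // A minimal sketch of the Ready-condition test behind the
    // pod_ready.go:92/:102 lines above, using k8s.io/api/core/v1 types.
    package podstatus

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false // no Ready condition reported yet
    }
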
	I0213 16:06:41.230491   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:41.248063   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:41.267063   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.267077   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:41.267153   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:41.285751   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.285765   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:41.285831   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:41.303981   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.303995   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:41.304061   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:41.322783   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.322797   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:41.322872   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:41.341906   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.341920   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:41.341985   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:41.361943   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.361957   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:41.362029   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:41.380925   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.380940   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:41.381008   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:41.401210   23204 logs.go:276] 0 containers: []
	W0213 16:06:41.401225   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:41.401233   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:41.401243   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:41.444527   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:41.444549   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:41.465064   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:41.465102   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:41.531320   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:41.531334   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:41.531341   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:41.552924   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:41.552940   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:44.119663   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:44.137105   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:44.177327   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.177344   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:44.177409   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:44.196667   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.196681   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:44.196748   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:44.217301   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.217316   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:44.217392   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:44.237862   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.237875   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:44.237954   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:44.256685   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.256699   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:44.256784   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:44.276758   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.276772   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:44.276846   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:44.296838   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.296851   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:44.296918   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:44.317038   23204 logs.go:276] 0 containers: []
	W0213 16:06:44.317051   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:44.317058   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:44.317064   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:44.338549   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:44.338564   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:44.403691   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:44.403706   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:44.447056   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:44.447071   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:44.468062   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:44.468145   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:44.533082   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
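
The failed describe-nodes block above is what logs.go:130 emits when kubectl exits non-zero: stdout is empty and stderr carries the connection-refused line, because nothing is listening on localhost:8443. A minimal sketch of capturing a command's exit status, stdout, and stderr in that shape; it runs kubectl locally rather than over SSH, purely for illustration:

    // A minimal sketch of failure capture in the spirit of the
    // ssh_runner/logs.go lines above: run the command, keep stdout and
    // stderr separately, and surface the non-zero exit status.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With no apiserver on localhost:8443, kubectl exits 1 and stderr
            // carries the "connection ... refused" line seen in the log.
            fmt.Printf("failed: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
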
	I0213 16:06:43.086971   23626 pod_ready.go:102] pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace has status "Ready":"False"
	I0213 16:06:45.586680   23626 pod_ready.go:102] pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace has status "Ready":"False"
	I0213 16:06:46.086692   23626 pod_ready.go:92] pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:46.086705   23626 pod_ready.go:81] duration metric: took 7.507174997s waiting for pod "coredns-5dd5756b68-wpmhv" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:46.086715   23626 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:47.034230   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:47.052074   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:47.070932   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.070946   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:47.071010   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:47.090609   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.090625   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:47.090701   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:47.110859   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.110871   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:47.110932   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:47.130837   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.130850   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:47.130921   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:47.150552   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.150566   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:47.150642   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:47.171211   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.171225   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:47.171294   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:47.192077   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.192093   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:47.192158   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:47.211592   23204 logs.go:276] 0 containers: []
	W0213 16:06:47.211607   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:47.211614   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:47.211621   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:47.277233   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:47.277250   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:47.322830   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:47.322857   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:47.344753   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:47.344787   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:47.414534   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:47.414549   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:47.414560   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:49.937219   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:49.956116   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:49.976815   23204 logs.go:276] 0 containers: []
	W0213 16:06:49.976829   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:49.976895   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:49.996231   23204 logs.go:276] 0 containers: []
	W0213 16:06:49.996244   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:49.996327   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:50.016122   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.016151   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:50.016216   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:50.034981   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.034996   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:50.035067   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:50.055131   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.055144   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:50.055210   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:50.074693   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.074706   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:50.074768   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:50.094155   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.094168   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:50.094260   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:50.117831   23204 logs.go:276] 0 containers: []
	W0213 16:06:50.117845   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:50.117851   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:50.117859   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:50.137877   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:50.137893   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:50.206221   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:50.206234   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:50.206248   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:48.101246   23626 pod_ready.go:102] pod "etcd-embed-certs-743000" in "kube-system" namespace has status "Ready":"False"
	I0213 16:06:49.592960   23626 pod_ready.go:92] pod "etcd-embed-certs-743000" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:49.592972   23626 pod_ready.go:81] duration metric: took 3.506327657s waiting for pod "etcd-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:49.592979   23626 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.099528   23626 pod_ready.go:92] pod "kube-apiserver-embed-certs-743000" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:51.099541   23626 pod_ready.go:81] duration metric: took 1.50658918s waiting for pod "kube-apiserver-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.099550   23626 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:50.227555   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:50.238810   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:50.308254   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:50.308270   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:52.853636   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:52.875237   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:52.895162   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.895177   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:52.895243   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:52.922809   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.922822   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:52.922882   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:52.943679   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.943693   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:52.943767   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:52.964526   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.964541   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:52.964610   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:52.983233   23204 logs.go:276] 0 containers: []
	W0213 16:06:52.983249   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:52.983322   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:53.005049   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.005062   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:53.005130   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:53.026461   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.026479   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:53.026553   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:53.046688   23204 logs.go:276] 0 containers: []
	W0213 16:06:53.046703   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:53.046716   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:53.046731   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:53.097708   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:53.097731   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:53.120490   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:53.120513   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:53.189929   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:53.189959   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:53.189967   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:53.212296   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:53.212315   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:51.606951   23626 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743000" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:51.606963   23626 pod_ready.go:81] duration metric: took 507.41734ms waiting for pod "kube-controller-manager-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.606972   23626 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zh28s" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.611651   23626 pod_ready.go:92] pod "kube-proxy-zh28s" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:51.611660   23626 pod_ready.go:81] duration metric: took 4.683103ms waiting for pod "kube-proxy-zh28s" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.611667   23626 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.616385   23626 pod_ready.go:92] pod "kube-scheduler-embed-certs-743000" in "kube-system" namespace has status "Ready":"True"
	I0213 16:06:51.616395   23626 pod_ready.go:81] duration metric: took 4.716248ms waiting for pod "kube-scheduler-embed-certs-743000" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:51.616401   23626 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace to be "Ready" ...
	I0213 16:06:53.622877   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:06:56.122515   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
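
The repeated pod_ready.go:102 lines above come from re-checking the Pending metrics-server pod every couple of seconds inside a 4m0s budget. A minimal sketch of such a poll-until-timeout loop using the generic wait helper from k8s.io/apimachinery; the 2-second interval and the stubbed condition are assumptions, not minikube's exact code:

    // A minimal sketch of the "waiting up to 4m0s ... to be Ready" loop.
    // The real loop fetches the pod and tests its Ready condition, as
    // sketched earlier; here the check is a placeholder.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            ready := false // placeholder: fetch the pod and test isPodReady here
            return ready, nil
        })
        if err != nil {
            fmt.Println("pod never became Ready:", err) // e.g. wait.ErrWaitTimeout
        }
    }
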
	I0213 16:06:55.779053   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:55.795938   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:55.815494   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.815520   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:55.815601   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:55.834915   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.834928   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:55.835000   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:55.853845   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.853858   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:55.853929   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:55.872716   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.872730   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:55.872799   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:55.891974   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.891987   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:55.892051   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:55.912069   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.912083   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:55.912150   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:55.933564   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.933578   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:55.933647   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:55.953304   23204 logs.go:276] 0 containers: []
	W0213 16:06:55.953319   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:55.953326   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:55.953334   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:55.998111   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:55.998130   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:56.018937   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:56.018953   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:56.090168   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:56.090180   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:56.090197   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:56.114537   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:56.114555   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:58.699809   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:06:58.717085   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:06:58.736601   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.736625   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:06:58.736713   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:06:58.756976   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.756988   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:06:58.757058   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:06:58.775336   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.775350   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:06:58.775420   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:06:58.794895   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.794909   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:06:58.794973   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:06:58.813693   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.813707   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:06:58.813779   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:06:58.834442   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.834455   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:06:58.834526   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:06:58.853628   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.853642   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:06:58.853709   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:06:58.874226   23204 logs.go:276] 0 containers: []
	W0213 16:06:58.874241   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:06:58.874249   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:06:58.874258   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:06:58.918908   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:06:58.918930   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:06:58.940967   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:06:58.940986   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:06:59.010750   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:06:59.010778   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:06:59.010788   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:06:59.032921   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:06:59.032938   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:06:58.124647   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:00.624503   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:01.670843   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:01.688847   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:01.707712   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.707726   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:01.707798   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:01.726763   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.726780   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:01.726853   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:01.745728   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.745742   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:01.745809   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:01.764786   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.764801   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:01.764865   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:01.784588   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.784603   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:01.784667   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:01.804107   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.804120   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:01.804186   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:01.824939   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.824953   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:01.825020   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:01.847345   23204 logs.go:276] 0 containers: []
	W0213 16:07:01.847359   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:01.847368   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:01.847374   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:01.894045   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:01.894063   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:01.919018   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:01.919034   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:01.987690   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:01.987707   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:01.987717   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:02.009473   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:02.009489   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:04.576753   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:04.594769   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:04.612281   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.612316   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:04.612411   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:04.632386   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.632401   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:04.632467   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:04.653301   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.653317   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:04.653381   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:04.672749   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.672763   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:04.672832   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:04.693469   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.693484   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:04.693557   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:04.712727   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.712742   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:04.712828   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:04.731537   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.731552   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:04.731618   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:04.750678   23204 logs.go:276] 0 containers: []
	W0213 16:07:04.750692   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:04.750699   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:04.750707   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:04.830164   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:04.830183   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:04.830199   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:04.873186   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:04.873203   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:04.980459   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:04.980476   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:05.026446   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:05.026465   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
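
Editor's note: the block above is one pass of minikube's diagnostic loop — it polls for a kube-apiserver process, lists each expected control-plane container by name, then gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs. The same probes can be run by hand inside the node; the commands below are copied from the log lines above (the k8s_ name prefix is the dockershim container-naming convention), so only the choice to run them manually is an assumption:

    # Does any container named like the apiserver exist? Prints its ID if so.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    # Container status as minikube collects it: prefer crictl, fall back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Every pass in this run returns "0 containers", which is why the describe-nodes step keeps failing with the localhost:8443 connection refusal.
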
	I0213 16:07:02.625707   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:05.123272   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:07.547198   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:07.564676   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:07.584452   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.584482   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:07.584546   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:07.603961   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.603976   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:07.604043   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:07.625951   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.625961   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:07.626027   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:07.645335   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.645350   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:07.645433   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:07.664760   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.664773   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:07.664840   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:07.685916   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.685929   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:07.685996   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:07.705424   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.705439   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:07.705507   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:07.725077   23204 logs.go:276] 0 containers: []
	W0213 16:07:07.725092   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:07.725099   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:07.725107   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:07.768365   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:07.768381   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:07.789020   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:07.789036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:07.856687   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:07.856700   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:07.856722   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:07.878857   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:07.878872   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:07.624106   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:10.123394   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:10.446464   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:10.462730   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:10.481747   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.481762   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:10.481827   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:10.501297   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.501312   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:10.501378   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:10.521630   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.521644   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:10.521708   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:10.540843   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.540859   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:10.540927   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:10.561550   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.561566   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:10.561635   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:10.581426   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.581440   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:10.581506   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:10.602519   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.602533   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:10.602599   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:10.623990   23204 logs.go:276] 0 containers: []
	W0213 16:07:10.624004   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:10.624012   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:10.624021   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:10.690490   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:10.690502   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:10.690524   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:10.713138   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:10.713151   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:10.780997   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:10.781012   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:10.823764   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:10.823779   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:13.346412   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:13.364742   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:13.385605   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.385618   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:13.385684   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:13.404659   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.404674   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:13.404741   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:13.424194   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.424208   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:13.424276   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:13.444479   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.444495   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:13.444579   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:13.463851   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.463865   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:13.463929   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:13.482756   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.482771   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:13.482836   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:13.501234   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.501248   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:13.501317   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:13.521980   23204 logs.go:276] 0 containers: []
	W0213 16:07:13.522000   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:13.522009   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:13.522016   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:13.566636   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:13.566655   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:13.589727   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:13.589758   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:13.687645   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:13.687657   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:13.687666   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:13.709059   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:13.709074   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:12.622973   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:14.623575   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:16.278222   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:16.295645   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:16.314050   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.314065   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:16.314151   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:16.333624   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.333639   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:16.333707   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:16.352203   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.352217   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:16.352287   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:16.371605   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.371620   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:16.371685   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:16.391243   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.391259   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:16.391324   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:16.410532   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.410546   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:16.410611   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:16.432222   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.432236   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:16.432347   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:16.452388   23204 logs.go:276] 0 containers: []
	W0213 16:07:16.452403   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:16.452410   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:16.452418   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:16.473970   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:16.473983   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:16.542437   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:16.542453   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:16.592402   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:16.592421   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:16.615435   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:16.615452   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:16.683684   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:19.185219   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:19.203205   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:19.222329   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.222343   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:19.222408   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:19.240682   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.240695   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:19.240761   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:19.259831   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.259847   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:19.259922   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:19.279989   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.280003   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:19.280069   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:19.298953   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.298968   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:19.299037   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:19.319608   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.319623   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:19.319687   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:19.338179   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.338193   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:19.338258   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:19.357191   23204 logs.go:276] 0 containers: []
	W0213 16:07:19.357205   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:19.357212   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:19.357237   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:19.401102   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:19.401118   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:19.422599   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:19.422656   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:19.495266   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:19.495300   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:19.495308   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:19.517454   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:19.517494   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:17.123078   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:19.621692   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:22.088434   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:22.106810   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:22.126803   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.126816   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:22.126880   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:22.148572   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.148587   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:22.148665   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:22.170036   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.170051   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:22.170115   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:22.189367   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.189382   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:22.189449   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:22.208923   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.208938   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:22.209008   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:22.229102   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.229116   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:22.229184   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:22.248885   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.248899   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:22.248963   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:22.267890   23204 logs.go:276] 0 containers: []
	W0213 16:07:22.267905   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:22.267912   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:22.267919   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:22.314020   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:22.314036   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:22.334595   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:22.334611   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:22.413063   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:22.413102   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:22.413124   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:22.434670   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:22.434684   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:24.999557   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:25.016568   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:07:25.037462   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.037475   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:07:25.037547   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:07:25.058411   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.058424   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:07:25.058492   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:07:25.079318   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.079332   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:07:25.079404   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:07:25.102107   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.102122   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:07:25.102214   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:07:25.122393   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.122406   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:07:25.122471   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:07:25.142193   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.142212   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:07:25.142320   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:07:25.169785   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.169800   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:07:25.169881   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:07:25.189929   23204 logs.go:276] 0 containers: []
	W0213 16:07:25.189944   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0213 16:07:25.189951   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:07:25.189958   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:07:21.623328   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:24.122960   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:26.123347   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:25.236980   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:07:25.237000   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:07:25.258559   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:07:25.258575   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:07:25.336280   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:07:25.336319   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:07:25.336327   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:07:25.359020   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:07:25.359035   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 16:07:27.923949   23204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:07:27.941344   23204 kubeadm.go:640] restartCluster took 4m12.442168861s
	W0213 16:07:27.941388   23204 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0213 16:07:27.941407   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 16:07:28.367255   23204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:07:28.384676   23204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:07:28.400446   23204 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 16:07:28.400537   23204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:07:28.415605   23204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 16:07:28.415636   23204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 16:07:28.472081   23204 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 16:07:28.472580   23204 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 16:07:28.727540   23204 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 16:07:28.727715   23204 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 16:07:28.727804   23204 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 16:07:28.904696   23204 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 16:07:28.906558   23204 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 16:07:28.913307   23204 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 16:07:28.984848   23204 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 16:07:29.006643   23204 out.go:204]   - Generating certificates and keys ...
	I0213 16:07:29.006777   23204 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 16:07:29.006913   23204 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 16:07:29.007031   23204 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 16:07:29.007125   23204 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 16:07:29.007195   23204 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 16:07:29.007249   23204 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 16:07:29.007374   23204 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 16:07:29.007502   23204 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 16:07:29.007638   23204 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 16:07:29.007718   23204 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 16:07:29.007803   23204 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 16:07:29.007918   23204 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 16:07:29.373507   23204 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 16:07:29.623325   23204 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 16:07:29.737480   23204 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 16:07:29.926003   23204 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 16:07:29.926937   23204 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 16:07:29.948773   23204 out.go:204]   - Booting up control plane ...
	I0213 16:07:29.948903   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 16:07:29.948984   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 16:07:29.949038   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 16:07:29.949096   23204 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 16:07:29.949220   23204 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 16:07:28.622094   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:30.622766   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:33.121690   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:35.121795   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:37.122302   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:39.624258   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:42.121698   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:44.622636   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:46.625067   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:49.121226   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:51.121646   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:53.122804   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:55.623126   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:07:57.623249   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:00.122564   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:02.621183   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:04.621432   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:09.935613   23204 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 16:08:09.936349   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:09.936546   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:06.623891   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:09.122400   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:14.937992   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:14.938152   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
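
Editor's note: these kubelet-check messages repeat at growing intervals until kubeadm's 4m0s wait-control-plane window expires. A hand-run equivalent of the probe, using the exact URL quoted in the message (run inside the minikube node; "connection refused" means nothing is listening on the kubelet's healthz port 10248):

    curl -sSL http://localhost:10248/healthz
    # kubeadm's own suggested follow-ups (quoted in the error text further below):
    systemctl status kubelet
    journalctl -xeu kubelet
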
	I0213 16:08:11.623787   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:14.121094   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:16.122747   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:18.622377   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:21.120291   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:24.939211   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:24.939460   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:23.122323   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:25.621103   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:28.121072   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:30.620871   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:32.621227   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:35.120296   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:37.122633   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:39.623264   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:44.940517   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:08:44.940747   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:08:42.123430   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:44.620142   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:46.621642   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:48.622018   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:51.121179   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:53.621874   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:56.120153   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:08:58.619802   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:00.620990   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:03.119663   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:05.120147   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:07.137356   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:09.621130   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:12.119599   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:14.620134   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:16.620970   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:19.118755   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:21.118973   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:24.944560   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:09:24.944714   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:09:24.944726   23204 kubeadm.go:322] 
	I0213 16:09:24.944754   23204 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 16:09:24.944788   23204 kubeadm.go:322] 	timed out waiting for the condition
	I0213 16:09:24.944799   23204 kubeadm.go:322] 
	I0213 16:09:24.944829   23204 kubeadm.go:322] This error is likely caused by:
	I0213 16:09:24.944854   23204 kubeadm.go:322] 	- The kubelet is not running
	I0213 16:09:24.944939   23204 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 16:09:24.944949   23204 kubeadm.go:322] 
	I0213 16:09:24.945021   23204 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 16:09:24.945044   23204 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 16:09:24.945068   23204 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 16:09:24.945074   23204 kubeadm.go:322] 
	I0213 16:09:24.945171   23204 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 16:09:24.945256   23204 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0213 16:09:24.945325   23204 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0213 16:09:24.945369   23204 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 16:09:24.945424   23204 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 16:09:24.945450   23204 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 16:09:24.948369   23204 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 16:09:24.948433   23204 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 16:09:24.948548   23204 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 16:09:24.948633   23204 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:09:24.948707   23204 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 16:09:24.948764   23204 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0213 16:09:24.948838   23204 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
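Editor's note: with the control plane never coming up, the kubeadm error text above already names the container-level checks; collected here as a runnable sequence (CONTAINERID is the placeholder from the kubeadm message, to be replaced with an ID from the grep output):

    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID
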
	I0213 16:09:24.948870   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0213 16:09:25.381552   23204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:09:25.398785   23204 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 16:09:25.398853   23204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:09:25.415534   23204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 16:09:25.415561   23204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 16:09:25.470972   23204 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0213 16:09:25.471013   23204 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 16:09:25.728855   23204 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 16:09:25.728999   23204 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 16:09:25.729111   23204 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 16:09:25.912400   23204 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 16:09:25.913158   23204 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 16:09:25.919829   23204 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0213 16:09:25.982141   23204 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 16:09:23.118954   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:25.123357   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:26.004839   23204 out.go:204]   - Generating certificates and keys ...
	I0213 16:09:26.004923   23204 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 16:09:26.004996   23204 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 16:09:26.005054   23204 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 16:09:26.005105   23204 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 16:09:26.005198   23204 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 16:09:26.005259   23204 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 16:09:26.005311   23204 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 16:09:26.005443   23204 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 16:09:26.005546   23204 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 16:09:26.005646   23204 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 16:09:26.005707   23204 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 16:09:26.005797   23204 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 16:09:26.109033   23204 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 16:09:26.222229   23204 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 16:09:26.361237   23204 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 16:09:26.518292   23204 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 16:09:26.518801   23204 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 16:09:26.548808   23204 out.go:204]   - Booting up control plane ...
	I0213 16:09:26.548949   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 16:09:26.549087   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 16:09:26.549176   23204 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 16:09:26.549321   23204 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 16:09:26.549634   23204 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
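	During wait-control-plane, kubeadm polls the kubelet's local health endpoint and then the API server for up to 4m0s. Both probes can be reproduced by hand inside the node; a sketch, assuming the API server port 8443 that appears later in this log:

	    curl -s  http://localhost:10248/healthz    # kubelet health, the check that fails below
	    curl -sk https://localhost:8443/healthz    # kube-apiserver health (-k: self-signed cert)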
	I0213 16:09:27.619883   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:29.620118   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:31.620762   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:34.119498   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:36.121640   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:38.621111   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:41.118810   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:43.119025   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:45.620005   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:47.620502   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:50.119644   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:52.119643   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:54.618814   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:56.621063   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:09:59.118718   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:01.618312   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:03.620317   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:05.620841   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:06.528417   23204 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0213 16:10:06.530098   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:06.530405   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:08.118184   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:10.119236   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:11.531782   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:11.531959   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:12.619549   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:15.118010   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:17.120441   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:19.619075   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:21.532836   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:21.533022   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:22.117896   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:24.119065   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:26.619982   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:29.119355   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:31.618819   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:33.619192   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:35.619826   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:38.119862   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:40.618470   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:41.535320   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:10:41.535554   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:10:42.621101   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:45.119000   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:47.617425   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:49.618243   23626 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace has status "Ready":"False"
	I0213 16:10:51.612819   23626 pod_ready.go:81] duration metric: took 4m0.001588693s waiting for pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace to be "Ready" ...
	E0213 16:10:51.612837   23626 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sqmnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 16:10:51.612860   23626 pod_ready.go:38] duration metric: took 4m13.045042646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 16:10:51.612884   23626 kubeadm.go:640] restartCluster took 4m30.430261841s
	W0213 16:10:51.612927   23626 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 16:10:51.612945   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 16:10:58.325343   23626 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.712531053s)
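	Having given up on restarting the existing cluster, minikube wipes the node state with kubeadm reset before re-running init. The equivalent manual command, as invoked above:

	    # --force skips the interactive confirmation; --cri-socket pins the cri-dockerd endpoint
	    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force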
	I0213 16:10:58.325403   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:10:58.342785   23626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:10:58.358416   23626 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 16:10:58.358491   23626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:10:58.374045   23626 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 16:10:58.374092   23626 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 16:10:58.422736   23626 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 16:10:58.422834   23626 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 16:10:58.548671   23626 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 16:10:58.548756   23626 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 16:10:58.548840   23626 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 16:10:58.847908   23626 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 16:10:58.873561   23626 out.go:204]   - Generating certificates and keys ...
	I0213 16:10:58.873639   23626 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 16:10:58.873693   23626 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 16:10:58.873768   23626 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 16:10:58.873828   23626 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 16:10:58.873890   23626 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 16:10:58.873941   23626 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 16:10:58.874006   23626 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 16:10:58.874069   23626 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 16:10:58.874135   23626 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 16:10:58.874211   23626 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 16:10:58.874249   23626 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 16:10:58.874306   23626 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 16:10:59.179389   23626 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 16:10:59.310055   23626 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 16:10:59.420636   23626 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 16:10:59.557941   23626 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 16:10:59.558250   23626 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 16:10:59.560197   23626 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 16:10:59.583052   23626 out.go:204]   - Booting up control plane ...
	I0213 16:10:59.583263   23626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 16:10:59.583383   23626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 16:10:59.583432   23626 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 16:10:59.583641   23626 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 16:10:59.583809   23626 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 16:10:59.583872   23626 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 16:10:59.645406   23626 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 16:11:05.146989   23626 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501926 seconds
	I0213 16:11:05.147112   23626 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 16:11:05.156024   23626 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 16:11:05.673802   23626 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 16:11:05.674010   23626 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-743000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 16:11:06.181392   23626 kubeadm.go:322] [bootstrap-token] Using token: yciz3y.2s7pe0qbjboxqf7a
	I0213 16:11:06.218737   23626 out.go:204]   - Configuring RBAC rules ...
	I0213 16:11:06.218958   23626 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 16:11:06.221694   23626 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 16:11:06.260507   23626 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 16:11:06.262997   23626 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 16:11:06.265312   23626 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 16:11:06.268352   23626 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 16:11:06.276192   23626 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 16:11:06.400803   23626 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 16:11:06.628903   23626 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 16:11:06.629683   23626 kubeadm.go:322] 
	I0213 16:11:06.629812   23626 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 16:11:06.629822   23626 kubeadm.go:322] 
	I0213 16:11:06.629905   23626 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 16:11:06.629920   23626 kubeadm.go:322] 
	I0213 16:11:06.629984   23626 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 16:11:06.630130   23626 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 16:11:06.630203   23626 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 16:11:06.630212   23626 kubeadm.go:322] 
	I0213 16:11:06.630272   23626 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 16:11:06.630278   23626 kubeadm.go:322] 
	I0213 16:11:06.630325   23626 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 16:11:06.630333   23626 kubeadm.go:322] 
	I0213 16:11:06.630410   23626 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 16:11:06.630529   23626 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 16:11:06.630603   23626 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 16:11:06.630610   23626 kubeadm.go:322] 
	I0213 16:11:06.630698   23626 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 16:11:06.630806   23626 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 16:11:06.630818   23626 kubeadm.go:322] 
	I0213 16:11:06.630898   23626 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yciz3y.2s7pe0qbjboxqf7a \
	I0213 16:11:06.631027   23626 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ec544454347b5e5d48e23ee1b9aa2810f9f410e5602199cd4da9ee9f3806dac7 \
	I0213 16:11:06.631061   23626 kubeadm.go:322] 	--control-plane 
	I0213 16:11:06.631074   23626 kubeadm.go:322] 
	I0213 16:11:06.631218   23626 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 16:11:06.631241   23626 kubeadm.go:322] 
	I0213 16:11:06.631353   23626 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yciz3y.2s7pe0qbjboxqf7a \
	I0213 16:11:06.631504   23626 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ec544454347b5e5d48e23ee1b9aa2810f9f410e5602199cd4da9ee9f3806dac7 
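	The bootstrap token printed above is short-lived (kubeadm tokens default to a 24h TTL), so the join commands in a report like this go stale quickly. A fresh join command can be regenerated on the control-plane node at any time:

	    kubeadm token create --print-join-command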
	I0213 16:11:06.700261   23626 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0213 16:11:06.700440   23626 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:11:06.700462   23626 cni.go:84] Creating CNI manager for ""
	I0213 16:11:06.700477   23626 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:11:06.722658   23626 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 16:11:06.764996   23626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 16:11:06.782720   23626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
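	The 457-byte conflist payload itself is not echoed in the log. For orientation only, a minimal host-local bridge conflist of the kind the bridge CNI plugin expects looks roughly like this; every field value here is an assumption, not the file minikube actually wrote:

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        }
	      ]
	    }
	    EOF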
	I0213 16:11:06.817236   23626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 16:11:06.817304   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:06.817308   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=embed-certs-743000 minikube.k8s.io/updated_at=2024_02_13T16_11_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:06.827592   23626 ops.go:34] apiserver oom_adj: -16
	I0213 16:11:06.919640   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:07.421150   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:07.920827   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:08.421330   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:08.920924   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:09.421241   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:09.920810   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:10.420848   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:10.920874   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:11.420835   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:11.921039   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:12.420708   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:12.920745   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:13.421257   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:13.920695   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:14.420785   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:14.920697   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:15.420833   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:15.920635   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:16.420978   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:16.920601   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:17.420567   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:17.920649   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:18.420632   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:18.920586   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:19.420580   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:19.920571   23626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 16:11:20.029860   23626 kubeadm.go:1088] duration metric: took 13.212898102s to wait for elevateKubeSystemPrivileges.
	I0213 16:11:20.029880   23626 kubeadm.go:406] StartCluster complete in 4m58.885441221s
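	The burst of `kubectl get sa default` calls above is a readiness poll: bring-up is not considered done until the default service account exists in the default namespace. A minimal equivalent of that loop:

	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done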
	I0213 16:11:20.029898   23626 settings.go:142] acquiring lock: {Name:mk73e2877e5f833d3067188c2d2115030ace2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:11:20.029990   23626 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:11:20.030810   23626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:11:20.031214   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 16:11:20.031257   23626 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
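	The toEnable map above corresponds to per-profile addon switches; the same state could be set from the CLI, e.g.:

	    minikube addons enable metrics-server -p embed-certs-743000
	    minikube addons enable dashboard -p embed-certs-743000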
	I0213 16:11:20.031305   23626 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743000"
	I0213 16:11:20.031321   23626 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743000"
	I0213 16:11:20.031329   23626 addons.go:69] Setting metrics-server=true in profile "embed-certs-743000"
	I0213 16:11:20.031343   23626 addons.go:234] Setting addon metrics-server=true in "embed-certs-743000"
	I0213 16:11:20.031338   23626 addons.go:69] Setting dashboard=true in profile "embed-certs-743000"
	W0213 16:11:20.031355   23626 addons.go:243] addon metrics-server should already be in state true
	I0213 16:11:20.031365   23626 addons.go:234] Setting addon dashboard=true in "embed-certs-743000"
	W0213 16:11:20.031373   23626 addons.go:243] addon dashboard should already be in state true
	I0213 16:11:20.031349   23626 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743000"
	I0213 16:11:20.031381   23626 config.go:182] Loaded profile config "embed-certs-743000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 16:11:20.031414   23626 host.go:66] Checking if "embed-certs-743000" exists ...
	I0213 16:11:20.031325   23626 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743000"
	W0213 16:11:20.031438   23626 addons.go:243] addon storage-provisioner should already be in state true
	I0213 16:11:20.031441   23626 host.go:66] Checking if "embed-certs-743000" exists ...
	I0213 16:11:20.031478   23626 host.go:66] Checking if "embed-certs-743000" exists ...
	I0213 16:11:20.031764   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:11:20.032000   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:11:20.033072   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:11:20.033318   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:11:20.129357   23626 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743000"
	W0213 16:11:20.156394   23626 addons.go:243] addon default-storageclass should already be in state true
	I0213 16:11:20.193044   23626 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0213 16:11:20.214186   23626 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:11:20.156355   23626 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 16:11:20.156424   23626 host.go:66] Checking if "embed-certs-743000" exists ...
	I0213 16:11:20.273217   23626 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:11:20.273682   23626 cli_runner.go:164] Run: docker container inspect embed-certs-743000 --format={{.State.Status}}
	I0213 16:11:20.332338   23626 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0213 16:11:20.354226   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 16:11:20.354263   23626 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 16:11:20.376401   23626 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0213 16:11:20.376409   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 16:11:20.376418   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0213 16:11:20.376491   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:11:20.376487   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:11:20.376491   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:11:20.396755   23626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
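	The sed pipeline above patches the CoreDNS Corefile in place so pods can resolve host.minikube.internal. Reconstructed from the sed expressions in that command (indentation approximate), the inserted block is:

	    hosts {
	       192.168.65.254 host.minikube.internal
	       fallthrough
	    }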
	I0213 16:11:20.437580   23626 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 16:11:20.437597   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 16:11:20.437693   23626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-743000
	I0213 16:11:20.463657   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:11:20.463663   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:11:20.463657   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:11:20.501059   23626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56812 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/embed-certs-743000/id_rsa Username:docker}
	I0213 16:11:20.597010   23626 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-743000" context rescaled to 1 replicas
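	Rescaling coredns to a single replica is the usual adjustment for a one-node cluster; the equivalent kubectl call would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1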
	I0213 16:11:20.597065   23626 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 16:11:20.636615   23626 out.go:177] * Verifying Kubernetes components...
	I0213 16:11:20.656470   23626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:11:20.914050   23626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:11:20.914901   23626 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0213 16:11:20.914945   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0213 16:11:20.915328   23626 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 16:11:20.915342   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 16:11:21.001838   23626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 16:11:21.103630   23626 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0213 16:11:21.103657   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0213 16:11:21.105468   23626 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 16:11:21.105483   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 16:11:21.309840   23626 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0213 16:11:21.309866   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0213 16:11:21.315596   23626 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 16:11:21.315615   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 16:11:21.426581   23626 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0213 16:11:21.426605   23626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0213 16:11:21.534963   23204 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0213 16:11:21.535176   23204 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0213 16:11:21.535189   23204 kubeadm.go:322] 
	I0213 16:11:21.535259   23204 kubeadm.go:322] Unfortunately, an error has occurred:
	I0213 16:11:21.535294   23204 kubeadm.go:322] 	timed out waiting for the condition
	I0213 16:11:21.535299   23204 kubeadm.go:322] 
	I0213 16:11:21.535323   23204 kubeadm.go:322] This error is likely caused by:
	I0213 16:11:21.535348   23204 kubeadm.go:322] 	- The kubelet is not running
	I0213 16:11:21.535437   23204 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0213 16:11:21.535447   23204 kubeadm.go:322] 
	I0213 16:11:21.535518   23204 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0213 16:11:21.535549   23204 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0213 16:11:21.535598   23204 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0213 16:11:21.535611   23204 kubeadm.go:322] 
	I0213 16:11:21.535719   23204 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0213 16:11:21.535833   23204 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0213 16:11:21.535954   23204 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0213 16:11:21.536018   23204 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0213 16:11:21.536119   23204 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0213 16:11:21.536146   23204 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0213 16:11:21.541193   23204 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 16:11:21.541292   23204 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0213 16:11:21.541420   23204 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0213 16:11:21.541537   23204 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 16:11:21.541659   23204 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0213 16:11:21.541779   23204 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0213 16:11:21.541823   23204 kubeadm.go:406] StartCluster complete in 8m6.083286501s
	I0213 16:11:21.541910   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 16:11:21.566985   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.567021   23204 logs.go:278] No container was found matching "kube-apiserver"
	I0213 16:11:21.567086   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 16:11:21.587296   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.587310   23204 logs.go:278] No container was found matching "etcd"
	I0213 16:11:21.587378   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 16:11:21.613957   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.613976   23204 logs.go:278] No container was found matching "coredns"
	I0213 16:11:21.614072   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 16:11:21.638991   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.639005   23204 logs.go:278] No container was found matching "kube-scheduler"
	I0213 16:11:21.639104   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 16:11:21.657743   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.657757   23204 logs.go:278] No container was found matching "kube-proxy"
	I0213 16:11:21.657821   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 16:11:21.677535   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.677551   23204 logs.go:278] No container was found matching "kube-controller-manager"
	I0213 16:11:21.677616   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 16:11:21.698531   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.698559   23204 logs.go:278] No container was found matching "kindnet"
	I0213 16:11:21.698708   23204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0213 16:11:21.726649   23204 logs.go:276] 0 containers: []
	W0213 16:11:21.726688   23204 logs.go:278] No container was found matching "kubernetes-dashboard"
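	The eight scans above all use the same docker name filter, one control-plane or addon component at a time. Compactly, the same sweep is:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	    done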
	I0213 16:11:21.726705   23204 logs.go:123] Gathering logs for kubelet ...
	I0213 16:11:21.726735   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 16:11:21.773929   23204 logs.go:123] Gathering logs for dmesg ...
	I0213 16:11:21.773944   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 16:11:21.794784   23204 logs.go:123] Gathering logs for describe nodes ...
	I0213 16:11:21.794807   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0213 16:11:21.874118   23204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0213 16:11:21.874131   23204 logs.go:123] Gathering logs for Docker ...
	I0213 16:11:21.874156   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 16:11:21.899202   23204 logs.go:123] Gathering logs for container status ...
	I0213 16:11:21.899229   23204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0213 16:11:21.974248   23204 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0213 16:11:21.974270   23204 out.go:239] * 
	W0213 16:11:21.974321   23204 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 16:11:21.974335   23204 out.go:239] * 
	W0213 16:11:21.975059   23204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 16:11:22.059690   23204 out.go:177] 
	W0213 16:11:22.101576   23204 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0213 16:11:22.101645   23204 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0213 16:11:22.101683   23204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0213 16:11:22.122643   23204 out.go:177] 
	
	
	==> Docker <==
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.561669846Z" level=info msg="Loading containers: start."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.650854264Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.688287423Z" level=info msg="Loading containers: done."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697470883Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697576732Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.720967399Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:02 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.721084588Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.066588540Z" level=info msg="Processing signal 'terminated'"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.067792543Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.068003985Z" level=info msg="Daemon shutdown complete"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.124939033Z" level=info msg="Starting up"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.391957321Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.574713073Z" level=info msg="Loading containers: start."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.666161350Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.703341083Z" level=info msg="Loading containers: done."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711453433Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711511537Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734314387Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734519985Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-14T00:11:24Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 00:11:25 up  1:34,  0 users,  load average: 5.20, 5.03, 4.98
	Linux old-k8s-version-745000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 00:11:22 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:11:23 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 149.
	Feb 14 00:11:23 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:11:23 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: I0214 00:11:23.882941   19459 server.go:410] Version: v1.16.0
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: I0214 00:11:23.883343   19459 plugins.go:100] No cloud provider specified.
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: I0214 00:11:23.883356   19459 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: I0214 00:11:23.885281   19459 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: W0214 00:11:23.887987   19459 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: W0214 00:11:23.888060   19459 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:11:23 old-k8s-version-745000 kubelet[19459]: F0214 00:11:23.888085   19459 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:11:23 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:11:23 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:11:24 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 150.
	Feb 14 00:11:24 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:11:24 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: I0214 00:11:24.629847   19564 server.go:410] Version: v1.16.0
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: I0214 00:11:24.630107   19564 plugins.go:100] No cloud provider specified.
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: I0214 00:11:24.630117   19564 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: I0214 00:11:24.631741   19564 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: W0214 00:11:24.632457   19564 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: W0214 00:11:24.632525   19564 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:11:24 old-k8s-version-745000 kubelet[19564]: F0214 00:11:24.632546   19564 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:11:24 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:11:24 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
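The kubelet section above shows the node agent crash-looping on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 150), and the container-status probe fails because /var/run/dockershim.sock never comes up. A v1.16-era kubelet expects per-controller cgroup v1 mountpoints under /sys/fs/cgroup, which are absent when the host exposes only the unified cgroup v2 hierarchy. A minimal diagnostic sketch, assuming shell access to the node container named in the logs:

	# Report the filesystem type at /sys/fs/cgroup inside the node container.
	# "cgroup2fs" means unified cgroup v2, which kubelet v1.16 does not support;
	# "tmpfs" indicates the cgroup v1 layout the old kubelet expects.
	docker exec old-k8s-version-745000 stat -fc %T /sys/fs/cgroup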
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (470.615452ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (510.66s)
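The start log's own suggestion is to pin the kubelet cgroup driver to systemd. A sketch of that retry, reusing the profile name and Kubernetes version visible in the log above; the --driver flag is an assumption based on this being a Docker_macOS run:

	# Retry per the log's suggestion and minikube issue #4172 (flags partly assumed).
	out/minikube-darwin-amd64 start -p old-k8s-version-745000 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=docker --kubernetes-version=v1.16.0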

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:12:51.271042    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:13:05.108054    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:13:07.411326    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:13:08.900052    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:13:32.540284    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 16:13:35.097990    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:14:17.222269    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:14:28.156588    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:14:31.350764    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:14:42.607441    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:14:55.761997    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:15:06.488960    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:15:14.372459    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:15:54.571080    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:16:14.255476    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:16:28.396358    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 16:16:34.437042    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:16:37.432588    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 16:16:45.969559    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:17:37.303978    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:17:57.483501    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:18:05.281322    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:18:07.585313    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:19:17.398609    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:20:06.492724    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (400.835586ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
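Every poll above was refused or cut off at https://127.0.0.1:56676, the host port mapped to the node's 8443/tcp apiserver endpoint (see the inspect output below), so the dashboard pod list could never be retrieved within the 9m0s window. The manual equivalent of the test's poll, assuming the profile's kubeconfig context exists under its usual name:

	# List dashboard pods by the same label selector the test waits on.
	kubectl --context old-k8s-version-745000 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard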
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:02:56.378811968Z",
	            "FinishedAt": "2024-02-14T00:02:53.615023812Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2a64fcfa6aa11a20fff2e331cf5eccb1c94776e7c7a038087879a448cd30e88",
	            "SandboxKey": "/var/run/docker/netns/f2a64fcfa6aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56672"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56673"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56674"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56675"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56676"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "591102cebfe18f51413c628ffec03eb73caab8e92285d1cbd8a06cabbd6bb2f8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
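The inspect output confirms the container itself is Running and that 8443/tcp is published on 127.0.0.1:56676, the same endpoint the dashboard poll failed against, so the refusals come from inside the node (no apiserver listening) rather than from the port mapping. A sketch for pulling that mapping directly with a Go template:

	# Print the host port published for the node's 8443/tcp (apiserver).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-745000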
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (397.791384ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25: (1.423374568s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	| delete  | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	| delete  | -p                                                     | disable-driver-mounts-253000 | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | disable-driver-mounts-253000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:12 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-788000  | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-788000       | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-788000                           | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-926000 --memory=2200 --alsologtostderr   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:19 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-926000             | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-926000                  | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-926000 --memory=2200 --alsologtostderr   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:20 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-926000 image list                           | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	| delete  | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
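For reference, the final newest-cni-926000 start recorded in the audit table above expands to the single invocation below (a reconstruction from the table rows, not part of the captured log; use whichever minikube binary the job built):

	minikube start -p newest-cni-926000 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --kubernetes-version=v1.29.0-rc.2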
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 16:19:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
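Each line below follows that klog-style header, so severities can be tallied from a saved copy of the log with standard tools (a minimal sketch; last-start.log is a hypothetical file holding the raw, unindented log text):

	# I=info, W=warning, E=error, F=fatal per the [IWEF] prefix
	grep -oE '^[IWEF]' last-start.log | sort | uniq -c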
	I0213 16:19:41.941790   24510 out.go:291] Setting OutFile to fd 1 ...
	I0213 16:19:41.942057   24510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:19:41.942062   24510 out.go:304] Setting ErrFile to fd 2...
	I0213 16:19:41.942067   24510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:19:41.942264   24510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 16:19:41.943740   24510 out.go:298] Setting JSON to false
	I0213 16:19:41.966384   24510 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6841,"bootTime":1707863140,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 16:19:41.966501   24510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 16:19:41.988376   24510 out.go:177] * [newest-cni-926000] minikube v1.32.0 on Darwin 14.3.1
	I0213 16:19:42.009817   24510 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 16:19:42.009864   24510 notify.go:220] Checking for updates...
	I0213 16:19:42.052902   24510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:19:42.113771   24510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 16:19:42.187874   24510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 16:19:42.246098   24510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 16:19:42.321787   24510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 16:19:42.361486   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:19:42.362042   24510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 16:19:42.418311   24510 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 16:19:42.418465   24510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:19:42.529391   24510 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:19:42.518137114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:19:42.551660   24510 out.go:177] * Using the docker driver based on existing profile
	I0213 16:19:42.594654   24510 start.go:298] selected driver: docker
	I0213 16:19:42.594710   24510 start.go:902] validating driver "docker" against &{Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:19:42.594826   24510 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 16:19:42.599250   24510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:19:42.706215   24510 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:19:42.695884533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:19:42.706459   24510 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 16:19:42.706511   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:19:42.706523   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:19:42.706535   24510 start_flags.go:321] config:
	{Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:19:42.750182   24510 out.go:177] * Starting control plane node newest-cni-926000 in cluster newest-cni-926000
	I0213 16:19:42.772111   24510 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 16:19:42.793778   24510 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 16:19:42.836168   24510 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 16:19:42.836219   24510 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 16:19:42.836252   24510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 16:19:42.836270   24510 cache.go:56] Caching tarball of preloaded images
	I0213 16:19:42.836557   24510 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 16:19:42.836584   24510 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 16:19:42.837532   24510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/config.json ...
	I0213 16:19:42.887966   24510 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 16:19:42.887980   24510 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 16:19:42.888011   24510 cache.go:194] Successfully downloaded all kic artifacts
	I0213 16:19:42.888051   24510 start.go:365] acquiring machines lock for newest-cni-926000: {Name:mkf7d939bdf8afc10c2d68774a69fb4470edc0fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 16:19:42.888206   24510 start.go:369] acquired machines lock for "newest-cni-926000" in 126.662µs
	I0213 16:19:42.888241   24510 start.go:96] Skipping create...Using existing machine configuration
	I0213 16:19:42.888250   24510 fix.go:54] fixHost starting: 
	I0213 16:19:42.888501   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:19:42.939906   24510 fix.go:102] recreateIfNeeded on newest-cni-926000: state=Stopped err=<nil>
	W0213 16:19:42.939939   24510 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 16:19:42.961640   24510 out.go:177] * Restarting existing docker container for "newest-cni-926000" ...
	I0213 16:19:43.004631   24510 cli_runner.go:164] Run: docker start newest-cni-926000
	I0213 16:19:43.269221   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:19:43.330110   24510 kic.go:430] container "newest-cni-926000" state is running.
	I0213 16:19:43.330859   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:43.393121   24510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/config.json ...
	I0213 16:19:43.393594   24510 machine.go:88] provisioning docker machine ...
	I0213 16:19:43.393618   24510 ubuntu.go:169] provisioning hostname "newest-cni-926000"
	I0213 16:19:43.393716   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:43.462353   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:43.462800   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:43.462816   24510 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-926000 && echo "newest-cni-926000" | sudo tee /etc/hostname
	I0213 16:19:43.464106   24510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 16:19:46.624239   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-926000
	
	I0213 16:19:46.624323   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:46.675925   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:46.676213   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:46.676226   24510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-926000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-926000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-926000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 16:19:46.816146   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:19:46.816164   24510 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 16:19:46.816187   24510 ubuntu.go:177] setting up certificates
	I0213 16:19:46.816193   24510 provision.go:83] configureAuth start
	I0213 16:19:46.816264   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:46.866343   24510 provision.go:138] copyHostCerts
	I0213 16:19:46.866452   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 16:19:46.866467   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 16:19:46.866613   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 16:19:46.866885   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 16:19:46.866891   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 16:19:46.866964   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 16:19:46.867149   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 16:19:46.867155   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 16:19:46.867229   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 16:19:46.867370   24510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.newest-cni-926000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-926000]
	I0213 16:19:47.008225   24510 provision.go:172] copyRemoteCerts
	I0213 16:19:47.008290   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 16:19:47.008345   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.061564   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:47.163779   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 16:19:47.203341   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 16:19:47.250567   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 16:19:47.296212   24510 provision.go:86] duration metric: configureAuth took 479.996893ms
	I0213 16:19:47.296226   24510 ubuntu.go:193] setting minikube options for container-runtime
	I0213 16:19:47.296372   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:19:47.296432   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.348718   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.349034   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.349044   24510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 16:19:47.488012   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 16:19:47.488025   24510 ubuntu.go:71] root file system type: overlay
	I0213 16:19:47.488109   24510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 16:19:47.488190   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.538865   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.539163   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.539209   24510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 16:19:47.702822   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 16:19:47.702914   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.753894   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.754191   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.754204   24510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 16:19:47.907955   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:19:47.907974   24510 machine.go:91] provisioned docker machine in 4.514315212s
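The diff-then-replace command above only swaps in docker.service.new and restarts Docker when the generated unit actually differs from the installed one. While the profile exists, the installed unit can be inspected from the host (a sketch using minikube's node shell; the profile name is the one from this run):

	minikube ssh -p newest-cni-926000 -- sudo systemctl cat docker.service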
	I0213 16:19:47.907985   24510 start.go:300] post-start starting for "newest-cni-926000" (driver="docker")
	I0213 16:19:47.907994   24510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 16:19:47.908073   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 16:19:47.908128   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.961784   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.065345   24510 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 16:19:48.069523   24510 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 16:19:48.069549   24510 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 16:19:48.069557   24510 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 16:19:48.069562   24510 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 16:19:48.069571   24510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 16:19:48.069674   24510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 16:19:48.069860   24510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 16:19:48.070073   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 16:19:48.084929   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:19:48.125054   24510 start.go:303] post-start completed in 217.057155ms
	I0213 16:19:48.125128   24510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 16:19:48.125198   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.176829   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.273141   24510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 16:19:48.278132   24510 fix.go:56] fixHost completed within 5.389813035s
	I0213 16:19:48.278151   24510 start.go:83] releasing machines lock for "newest-cni-926000", held for 5.389862964s
	I0213 16:19:48.278254   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:48.329860   24510 ssh_runner.go:195] Run: cat /version.json
	I0213 16:19:48.329876   24510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 16:19:48.329924   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.329950   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.388448   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.388461   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.482525   24510 ssh_runner.go:195] Run: systemctl --version
	I0213 16:19:48.588079   24510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 16:19:48.594236   24510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 16:19:48.623913   24510 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 16:19:48.623987   24510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 16:19:48.638879   24510 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 16:19:48.638896   24510 start.go:475] detecting cgroup driver to use...
	I0213 16:19:48.638909   24510 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:19:48.639020   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:19:48.666388   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 16:19:48.682374   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 16:19:48.699460   24510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 16:19:48.699552   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 16:19:48.718052   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:19:48.736395   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 16:19:48.756354   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:19:48.775467   24510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 16:19:48.791700   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 16:19:48.808244   24510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 16:19:48.822964   24510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 16:19:48.837975   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:48.898612   24510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 16:19:48.986323   24510 start.go:475] detecting cgroup driver to use...
	I0213 16:19:48.986369   24510 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:19:48.986475   24510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 16:19:49.007344   24510 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 16:19:49.007433   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 16:19:49.027656   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:19:49.062324   24510 ssh_runner.go:195] Run: which cri-dockerd
	I0213 16:19:49.066801   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 16:19:49.081972   24510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 16:19:49.119201   24510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 16:19:49.227464   24510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 16:19:49.322450   24510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 16:19:49.322599   24510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 16:19:49.353345   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:49.414159   24510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 16:19:49.690128   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 16:19:49.707371   24510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 16:19:49.725782   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:19:49.742976   24510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 16:19:49.804055   24510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 16:19:49.865553   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:49.929568   24510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 16:19:49.962166   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:19:49.979957   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:50.043245   24510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 16:19:50.135417   24510 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 16:19:50.135522   24510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 16:19:50.140345   24510 start.go:543] Will wait 60s for crictl version
	I0213 16:19:50.140399   24510 ssh_runner.go:195] Run: which crictl
	I0213 16:19:50.144771   24510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 16:19:50.196726   24510 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
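The CRI handshake above can be repeated by hand against the same node; crictl ships in the kicbase image, so a sketch along these lines should print the same RuntimeName/RuntimeVersion fields:

	minikube ssh -p newest-cni-926000 -- sudo crictl version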
	I0213 16:19:50.196815   24510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:19:50.220756   24510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:19:50.269553   24510 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0213 16:19:50.269670   24510 cli_runner.go:164] Run: docker exec -t newest-cni-926000 dig +short host.docker.internal
	I0213 16:19:50.390504   24510 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 16:19:50.390615   24510 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 16:19:50.395268   24510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 16:19:50.412276   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:50.485933   24510 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0213 16:19:50.508802   24510 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 16:19:50.508959   24510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:19:50.531179   24510 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 16:19:50.531203   24510 docker.go:615] Images already preloaded, skipping extraction
	I0213 16:19:50.531304   24510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:19:50.550958   24510 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 16:19:50.550990   24510 cache_images.go:84] Images are preloaded, skipping loading
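The two listings are minikube's before/after check for deciding whether the preload tarball needs extracting; the same listing can be reproduced by mirroring the command the log runs:

	minikube ssh -p newest-cni-926000 -- docker images --format '{{.Repository}}:{{.Tag}}'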
	I0213 16:19:50.551075   24510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 16:19:50.600500   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:19:50.600518   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:19:50.600531   24510 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0213 16:19:50.600547   24510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-926000 NodeName:newest-cni-926000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 16:19:50.600662   24510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-926000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
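	The kubeadm config above is rendered from the bootstrapper options logged at kubeadm.go:176. A minimal Go sketch of that kind of templating follows; the `kubeadmParams` struct and the abbreviated template are invented for illustration, not minikube's actual ones:

    // Sketch of rendering a kubeadm config from bootstrapper options.
    // Struct fields and template are illustrative only.
    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmParams struct { // hypothetical field names
        AdvertiseAddress string
        BindPort         int
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.76.2",
            BindPort:         8443,
            PodSubnet:        "10.42.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.29.0-rc.2",
        }
        // Print the rendered YAML; the bootstrapper instead copies it
        // to /var/tmp/minikube/kubeadm.yaml.new over SSH (see below).
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }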
	
	I0213 16:19:50.600725   24510 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-926000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 16:19:50.600786   24510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 16:19:50.616147   24510 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 16:19:50.616292   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 16:19:50.632032   24510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0213 16:19:50.660886   24510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 16:19:50.689973   24510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0213 16:19:50.719143   24510 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0213 16:19:50.724406   24510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
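	The bash one-liner above makes the /etc/hosts entry idempotent: strip any stale line for the host, then append the current mapping. A minimal Go sketch of the same technique, under the assumption of direct file access rather than the log's SSH + sudo path:

    // Drop any existing line ending in "\t<host>", then append "<ip>\t<host>".
    package main

    import (
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal")
    }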
	I0213 16:19:50.741246   24510 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000 for IP: 192.168.76.2
	I0213 16:19:50.741270   24510 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:19:50.741453   24510 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 16:19:50.741529   24510 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 16:19:50.741649   24510 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/client.key
	I0213 16:19:50.741794   24510 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.key.31bdca25
	I0213 16:19:50.741862   24510 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.key
	I0213 16:19:50.742076   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 16:19:50.742131   24510 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 16:19:50.742141   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 16:19:50.742177   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 16:19:50.742224   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 16:19:50.742253   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 16:19:50.742353   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:19:50.742982   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 16:19:50.783224   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 16:19:50.824438   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 16:19:50.866034   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 16:19:50.906244   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 16:19:50.946991   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 16:19:50.990609   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 16:19:51.038144   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 16:19:51.082361   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 16:19:51.123844   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 16:19:51.165931   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 16:19:51.206940   24510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 16:19:51.235805   24510 ssh_runner.go:195] Run: openssl version
	I0213 16:19:51.242558   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 16:19:51.260000   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.264427   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.264475   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.271473   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 16:19:51.286816   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 16:19:51.302864   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.307868   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.307917   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.314430   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 16:19:51.331592   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 16:19:51.348154   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.352840   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.352944   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.359906   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
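	The hash/symlink pairs above follow the OpenSSL certificate-directory convention: `openssl x509 -hash` prints the subject hash, and the cert is linked as /etc/ssl/certs/<hash>.0 so TLS stacks can locate it by subject. A minimal Go sketch of that step, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkByHash(pem string) error {
        // openssl x509 -hash -noout -in <pem> prints e.g. "b5213941"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // replace a stale link, mirroring `ln -fs`
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }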
	I0213 16:19:51.375225   24510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 16:19:51.379494   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 16:19:51.386185   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 16:19:51.393500   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 16:19:51.400097   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 16:19:51.406727   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 16:19:51.413430   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
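	Each `openssl x509 -checkend 86400` run above asks whether a cert will still be valid 24 hours from now. The equivalent check in Go's standard library, as a self-contained sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }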
	I0213 16:19:51.421045   24510 kubeadm.go:404] StartCluster: {Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:19:51.421243   24510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:19:51.438776   24510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 16:19:51.453979   24510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 16:19:51.453998   24510 kubeadm.go:636] restartCluster start
	I0213 16:19:51.454047   24510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 16:19:51.468658   24510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:51.468753   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:51.522714   24510 kubeconfig.go:135] verify returned: extract IP: "newest-cni-926000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:19:51.522874   24510 kubeconfig.go:146] "newest-cni-926000" context is missing from /Users/jenkins/minikube-integration/18169-6320/kubeconfig - will repair!
	I0213 16:19:51.523187   24510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:19:51.524709   24510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 16:19:51.540155   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:51.540229   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:51.556167   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:52.042231   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:52.042403   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:52.060746   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:52.542106   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:52.542207   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:52.558909   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:53.040293   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:53.040407   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:53.058046   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:53.542310   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:53.542551   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:53.561136   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:54.041063   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:54.041169   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:54.058068   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:54.542205   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:54.542364   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:54.560358   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:55.042270   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:55.042399   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:55.059308   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:55.540537   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:55.540660   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:55.558972   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:56.041064   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:56.041245   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:56.058869   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:56.541651   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:56.541730   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:56.558359   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:57.041992   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:57.042106   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:57.058895   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:57.541051   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:57.541177   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:57.559008   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:58.042385   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:58.042608   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:58.060986   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:58.540346   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:58.540499   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:58.557091   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:59.042375   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:59.042523   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:59.060050   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:59.540970   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:59.541138   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:59.559337   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:00.041076   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:00.041150   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:00.058366   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:00.542489   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:00.542596   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:00.560944   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.040355   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:01.040442   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:01.058108   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.541001   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:01.541104   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:01.558254   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
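	The repeated checks above are a fixed-interval poll (roughly every 500ms) for a running kube-apiserver, bounded by a context; when the deadline passes with no hit, the decision below ("needs reconfigure: apiserver error: context deadline exceeded") is taken. A minimal Go sketch of that loop, with a hypothetical timeout:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) error {
        for {
            // pgrep exits 0 only if a matching process exists.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // process found
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // e.g. context deadline exceeded
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServer(ctx))
    }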
	I0213 16:20:01.558270   24510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 16:20:01.558286   24510 kubeadm.go:1135] stopping kube-system containers ...
	I0213 16:20:01.558358   24510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:20:01.578330   24510 docker.go:483] Stopping containers: [8691fd9793a6 491cf802856a 74f4e5c38ae8 d5c43f78eec6 b093aa5e412a 923ae21fcad8 661093fbacd4 8af7911a4da6 64196263fa8d 30ffc84347a4 5c78b52a5638 5b649ba8dbcb d7244c640093 b2219dfe6aa1 c18ee34fd84e]
	I0213 16:20:01.578423   24510 ssh_runner.go:195] Run: docker stop 8691fd9793a6 491cf802856a 74f4e5c38ae8 d5c43f78eec6 b093aa5e412a 923ae21fcad8 661093fbacd4 8af7911a4da6 64196263fa8d 30ffc84347a4 5c78b52a5638 5b649ba8dbcb d7244c640093 b2219dfe6aa1 c18ee34fd84e
	I0213 16:20:01.599527   24510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 16:20:01.617540   24510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:20:01.632316   24510 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 14 00:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 00:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 14 00:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 14 00:19 /etc/kubernetes/scheduler.conf
	
	I0213 16:20:01.632383   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 16:20:01.647825   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 16:20:01.662932   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 16:20:01.678009   24510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.678079   24510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 16:20:01.693163   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 16:20:01.708125   24510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.708184   24510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 16:20:01.723608   24510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:20:01.740672   24510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 16:20:01.740695   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:01.805610   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.704983   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.838689   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.898002   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
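	The five commands above re-run individual `kubeadm init` phases against the regenerated config instead of a full init. A Go sketch of driving that sequence, mirroring the logged shell invocations:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        }
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
                phase)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
    }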
	I0213 16:20:03.004357   24510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:20:03.004438   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:03.504583   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:04.004844   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:04.029901   24510 api_server.go:72] duration metric: took 1.02552605s to wait for apiserver process to appear ...
	I0213 16:20:04.029940   24510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 16:20:04.029973   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:06.793285   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 16:20:06.793325   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 16:20:06.793339   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:06.802722   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:06.802771   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:07.030256   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:07.036814   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:07.036835   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:07.530171   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:07.538082   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:07.538101   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:08.030689   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:08.039238   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:08.039266   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:08.530120   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:08.535607   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 200:
	ok
	I0213 16:20:08.542053   24510 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 16:20:08.542067   24510 api_server.go:131] duration metric: took 4.512061537s to wait for apiserver health ...
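	The healthz wait above polls the locally forwarded apiserver port until /healthz returns 200 "ok", tolerating the interim 403 (anonymous user, RBAC not yet bootstrapped) and 500 (poststarthooks still failing) responses. A minimal Go sketch of that poll; TLS verification is skipped because the endpoint presents a self-signed cert:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is "ok"
                }
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://127.0.0.1:57620/healthz", time.Minute))
    }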
	I0213 16:20:08.542075   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:20:08.542085   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:20:08.565586   24510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 16:20:08.586490   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 16:20:08.603362   24510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
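	The bridge CNI step above writes a conflist into /etc/cni/net.d. The log does not show the 457-byte payload itself; the sketch below writes a typical bridge+portmap conflist matching the pod CIDR, as an assumed stand-in for the real content:

    package main

    import "os"

    // Illustrative bridge CNI config; the actual bytes minikube ships
    // are not reproduced in this log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.42.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        os.MkdirAll("/etc/cni/net.d", 0755)
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
    }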
	I0213 16:20:08.631771   24510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 16:20:08.640763   24510 system_pods.go:59] 8 kube-system pods found
	I0213 16:20:08.640782   24510 system_pods.go:61] "coredns-76f75df574-n2fjt" [75e4b513-27ac-494f-837c-acc037c73f30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 16:20:08.640791   24510 system_pods.go:61] "etcd-newest-cni-926000" [8518fdf2-78dc-46bc-a387-aaff7faeae4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 16:20:08.640796   24510 system_pods.go:61] "kube-apiserver-newest-cni-926000" [2b885b86-acc6-4ee3-be4b-6634cc674662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 16:20:08.640801   24510 system_pods.go:61] "kube-controller-manager-newest-cni-926000" [27c92979-f6ad-41e4-b688-3f9f6c0baafb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 16:20:08.640807   24510 system_pods.go:61] "kube-proxy-rr5df" [3e0247bc-bafb-4bc8-831f-4dd3c74b1c4a] Running
	I0213 16:20:08.640812   24510 system_pods.go:61] "kube-scheduler-newest-cni-926000" [99f79f45-8da6-4609-8841-5fdf7c3f7aa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 16:20:08.640817   24510 system_pods.go:61] "metrics-server-57f55c9bc5-jkgzd" [d1ab78c1-b022-4af9-bff7-62efcdf57c44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 16:20:08.640821   24510 system_pods.go:61] "storage-provisioner" [cc992d3f-3e9e-40e7-beef-1ffb833a2acd] Running
	I0213 16:20:08.640826   24510 system_pods.go:74] duration metric: took 9.044567ms to wait for pod list to return data ...
	I0213 16:20:08.640834   24510 node_conditions.go:102] verifying NodePressure condition ...
	I0213 16:20:08.644201   24510 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 16:20:08.644215   24510 node_conditions.go:123] node cpu capacity is 12
	I0213 16:20:08.644225   24510 node_conditions.go:105] duration metric: took 3.38773ms to run NodePressure ...
	I0213 16:20:08.644236   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:08.903641   24510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 16:20:08.912143   24510 ops.go:34] apiserver oom_adj: -16
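	The oom_adj probe above confirms the apiserver is protected from the kernel OOM killer (-16 lowers its kill priority). A small Go sketch of the same check, matching the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj`:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        pid := strings.Fields(string(out))[0] // first matching PID
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        fmt.Printf("apiserver oom_adj: %s err: %v\n", strings.TrimSpace(string(adj)), err)
    }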
	I0213 16:20:08.912158   24510 kubeadm.go:640] restartCluster took 17.45793868s
	I0213 16:20:08.912165   24510 kubeadm.go:406] StartCluster complete in 17.490912738s
	I0213 16:20:08.912177   24510 settings.go:142] acquiring lock: {Name:mk73e2877e5f833d3067188c2d2115030ace2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:20:08.912250   24510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:20:08.912896   24510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:20:08.913174   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 16:20:08.913224   24510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 16:20:08.913264   24510 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-926000"
	I0213 16:20:08.913278   24510 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-926000"
	I0213 16:20:08.913278   24510 addons.go:69] Setting dashboard=true in profile "newest-cni-926000"
	W0213 16:20:08.913283   24510 addons.go:243] addon storage-provisioner should already be in state true
	I0213 16:20:08.913290   24510 addons.go:234] Setting addon dashboard=true in "newest-cni-926000"
	I0213 16:20:08.913285   24510 addons.go:69] Setting default-storageclass=true in profile "newest-cni-926000"
	W0213 16:20:08.913294   24510 addons.go:243] addon dashboard should already be in state true
	I0213 16:20:08.913304   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:20:08.913309   24510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-926000"
	I0213 16:20:08.913317   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913316   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913351   24510 addons.go:69] Setting metrics-server=true in profile "newest-cni-926000"
	I0213 16:20:08.913378   24510 addons.go:234] Setting addon metrics-server=true in "newest-cni-926000"
	W0213 16:20:08.913389   24510 addons.go:243] addon metrics-server should already be in state true
	I0213 16:20:08.913432   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913590   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.913659   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.913788   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.914566   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.923914   24510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-926000" context rescaled to 1 replicas
	I0213 16:20:08.923971   24510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 16:20:08.945632   24510 out.go:177] * Verifying Kubernetes components...
	I0213 16:20:08.989545   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:20:09.023186   24510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 16:20:09.003136   24510 addons.go:234] Setting addon default-storageclass=true in "newest-cni-926000"
	I0213 16:20:09.017656   24510 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 16:20:09.017680   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.044364   24510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:20:09.044372   24510 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0213 16:20:09.044373   24510 addons.go:243] addon default-storageclass should already be in state true
	I0213 16:20:09.044395   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:09.044405   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 16:20:09.065679   24510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:20:09.066057   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:09.086196   24510 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0213 16:20:09.086215   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 16:20:09.086268   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 16:20:09.107412   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0213 16:20:09.107436   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0213 16:20:09.107479   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.107509   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.107533   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.116854   24510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:20:09.117096   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:09.144236   24510 api_server.go:72] duration metric: took 220.214345ms to wait for apiserver process to appear ...
	I0213 16:20:09.144256   24510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 16:20:09.144284   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:09.153688   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 200:
	ok
	I0213 16:20:09.155546   24510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 16:20:09.155565   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 16:20:09.155673   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.156838   24510 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 16:20:09.156918   24510 api_server.go:131] duration metric: took 12.651851ms to wait for apiserver health ...
	I0213 16:20:09.156940   24510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 16:20:09.165285   24510 system_pods.go:59] 8 kube-system pods found
	I0213 16:20:09.165314   24510 system_pods.go:61] "coredns-76f75df574-n2fjt" [75e4b513-27ac-494f-837c-acc037c73f30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 16:20:09.165326   24510 system_pods.go:61] "etcd-newest-cni-926000" [8518fdf2-78dc-46bc-a387-aaff7faeae4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 16:20:09.165341   24510 system_pods.go:61] "kube-apiserver-newest-cni-926000" [2b885b86-acc6-4ee3-be4b-6634cc674662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 16:20:09.165355   24510 system_pods.go:61] "kube-controller-manager-newest-cni-926000" [27c92979-f6ad-41e4-b688-3f9f6c0baafb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 16:20:09.165362   24510 system_pods.go:61] "kube-proxy-rr5df" [3e0247bc-bafb-4bc8-831f-4dd3c74b1c4a] Running
	I0213 16:20:09.165371   24510 system_pods.go:61] "kube-scheduler-newest-cni-926000" [99f79f45-8da6-4609-8841-5fdf7c3f7aa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 16:20:09.165378   24510 system_pods.go:61] "metrics-server-57f55c9bc5-jkgzd" [d1ab78c1-b022-4af9-bff7-62efcdf57c44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 16:20:09.165384   24510 system_pods.go:61] "storage-provisioner" [cc992d3f-3e9e-40e7-beef-1ffb833a2acd] Running
	I0213 16:20:09.165392   24510 system_pods.go:74] duration metric: took 8.440107ms to wait for pod list to return data ...
	I0213 16:20:09.165399   24510 default_sa.go:34] waiting for default service account to be created ...
	I0213 16:20:09.169659   24510 default_sa.go:45] found service account: "default"
	I0213 16:20:09.169676   24510 default_sa.go:55] duration metric: took 4.27117ms for default service account to be created ...
	I0213 16:20:09.169684   24510 kubeadm.go:581] duration metric: took 245.66785ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0213 16:20:09.169697   24510 node_conditions.go:102] verifying NodePressure condition ...
	I0213 16:20:09.175089   24510 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 16:20:09.175121   24510 node_conditions.go:123] node cpu capacity is 12
	I0213 16:20:09.175139   24510 node_conditions.go:105] duration metric: took 5.436985ms to run NodePressure ...
	I0213 16:20:09.175159   24510 start.go:228] waiting for startup goroutines ...
	I0213 16:20:09.182298   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.182291   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.183549   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.225316   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.307302   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 16:20:09.307301   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:20:09.307313   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 16:20:09.307684   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0213 16:20:09.307695   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0213 16:20:09.344399   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0213 16:20:09.344428   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0213 16:20:09.345040   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 16:20:09.345052   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 16:20:09.352436   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 16:20:09.412807   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 16:20:09.412826   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 16:20:09.414924   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0213 16:20:09.414933   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0213 16:20:09.498056   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0213 16:20:09.498069   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0213 16:20:09.498841   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 16:20:09.602210   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0213 16:20:09.602229   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0213 16:20:09.700772   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0213 16:20:09.700789   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0213 16:20:09.738703   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0213 16:20:09.738744   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0213 16:20:09.824970   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0213 16:20:09.824990   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0213 16:20:09.908251   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 16:20:09.908268   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0213 16:20:09.943790   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 16:20:10.398321   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090981133s)
	I0213 16:20:10.398353   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.045879571s)
	I0213 16:20:10.552796   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053883843s)
	I0213 16:20:10.552836   24510 addons.go:470] Verifying addon metrics-server=true in "newest-cni-926000"
	I0213 16:20:10.818199   24510 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-926000 addons enable metrics-server
	
	I0213 16:20:10.876387   24510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0213 16:20:10.935379   24510 addons.go:505] enable addons completed in 2.022140088s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0213 16:20:10.935417   24510 start.go:233] waiting for cluster config update ...
	I0213 16:20:10.935436   24510 start.go:242] writing updated cluster config ...
	I0213 16:20:10.935924   24510 ssh_runner.go:195] Run: rm -f paused
	I0213 16:20:10.981894   24510 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 16:20:11.003297   24510 out.go:177] * Done! kubectl is now configured to use "newest-cni-926000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.561669846Z" level=info msg="Loading containers: start."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.650854264Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.688287423Z" level=info msg="Loading containers: done."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697470883Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697576732Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.720967399Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:02 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.721084588Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.066588540Z" level=info msg="Processing signal 'terminated'"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.067792543Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.068003985Z" level=info msg="Daemon shutdown complete"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.124939033Z" level=info msg="Starting up"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.391957321Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.574713073Z" level=info msg="Loading containers: start."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.666161350Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.703341083Z" level=info msg="Loading containers: done."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711453433Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711511537Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734314387Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734519985Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-14T00:20:28Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 00:20:28 up  1:43,  0 users,  load average: 4.13, 4.98, 5.07
	Linux old-k8s-version-745000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 00:20:26 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:20:27 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 842.
	Feb 14 00:20:27 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:20:27 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: I0214 00:20:27.548690   31363 server.go:410] Version: v1.16.0
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: I0214 00:20:27.548950   31363 plugins.go:100] No cloud provider specified.
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: I0214 00:20:27.548969   31363 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: I0214 00:20:27.551159   31363 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: W0214 00:20:27.551989   31363 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: W0214 00:20:27.552071   31363 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:20:27 old-k8s-version-745000 kubelet[31363]: F0214 00:20:27.552101   31363 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:20:27 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:20:27 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:20:28 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 843.
	Feb 14 00:20:28 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:20:28 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: I0214 00:20:28.293706   31450 server.go:410] Version: v1.16.0
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: I0214 00:20:28.293909   31450 plugins.go:100] No cloud provider specified.
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: I0214 00:20:28.293917   31450 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: I0214 00:20:28.295475   31450 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: W0214 00:20:28.296300   31450 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: W0214 00:20:28.296372   31450 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:20:28 old-k8s-version-745000 kubelet[31450]: F0214 00:20:28.296402   31450 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:20:28 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:20:28 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (409.464897ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.22s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (385.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:21:14.259947    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:21:28.401812    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:21:34.442555    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:21:45.973893    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:22:29.018248    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.024805    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.035646    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.056995    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.098648    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.180846    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:29.341055    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:22:29.662398    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:30.302682    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:31.583570    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:34.144216    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:22:39.265085    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:22:49.506607    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:23:05.285155    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:23:07.590477    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:23:09.987313    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:23:32.718499    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:23:50.949030    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:24:17.402363    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:24:30.637456    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:24:31.530231    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:24:42.615341    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:25:06.496402    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:25:12.872284    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/default-k8s-diff-port-788000/client.crt: no such file or directory
E0213 16:25:14.380826    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:26:14.264690    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:26:28.405449    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:26:34.444353    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0213 16:26:45.977903    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:56676/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (393.021454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-745000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-745000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.594µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-745000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745000:

-- stdout --
	[
	    {
	        "Id": "2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7",
	        "Created": "2024-02-13T23:56:55.870618044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T00:02:56.378811968Z",
	            "FinishedAt": "2024-02-14T00:02:53.615023812Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/hosts",
	        "LogPath": "/var/lib/docker/containers/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7/2b4f372aa2467d6d5f725ec22664a522f6dc91951f9b73b544495bf22b4e63b7-json.log",
	        "Name": "/old-k8s-version-745000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d-init/diff:/var/lib/docker/overlay2/17d01b22a52da825ae58e67decfe3f4c8ae2f6fe80510c1be556e233e058ce7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b6060eec3a1586a55a82fcdff198c231a72ae85ee6f6b4c6b13c556de95556d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2a64fcfa6aa11a20fff2e331cf5eccb1c94776e7c7a038087879a448cd30e88",
	            "SandboxKey": "/var/run/docker/netns/f2a64fcfa6aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56672"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56673"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56674"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56675"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56676"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b4f372aa246",
	                        "old-k8s-version-745000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "e9fab362389ee13cca953b7169efcc99796a0092a501ddc4284447becaba8d37",
	                    "EndpointID": "591102cebfe18f51413c628ffec03eb73caab8e92285d1cbd8a06cabbd6bb2f8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-745000",
	                        "2b4f372aa246"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (391.736116ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-745000 logs -n 25: (1.39579169s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	| delete  | -p embed-certs-743000                                  | embed-certs-743000           | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	| delete  | -p                                                     | disable-driver-mounts-253000 | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:11 PST |
	|         | disable-driver-mounts-253000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:11 PST | 13 Feb 24 16:12 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-788000  | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-788000       | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:12 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:12 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-788000                           | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-788000 | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:18 PST |
	|         | default-k8s-diff-port-788000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-926000 --memory=2200 --alsologtostderr   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:18 PST | 13 Feb 24 16:19 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-926000             | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-926000                  | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:19 PST |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-926000 --memory=2200 --alsologtostderr   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:19 PST | 13 Feb 24 16:20 PST |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.29.0-rc.2     |                              |         |         |                     |                     |
	| image   | newest-cni-926000 image list                           | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	| delete  | -p newest-cni-926000                                   | newest-cni-926000            | jenkins | v1.32.0 | 13 Feb 24 16:20 PST | 13 Feb 24 16:20 PST |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
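	For reference, each wrapped `start` row above is a single invocation; the final newest-cni-926000 restart, reassembled on one command line, reads:

	    out/minikube-darwin-amd64 start -p newest-cni-926000 --memory=2200 --alsologtostderr \
	      --wait=apiserver,system_pods,default_sa \
	      --feature-gates ServerSideApply=true \
	      --network-plugin=cni \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=docker --kubernetes-version=v1.29.0-rc.2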
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 16:19:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 16:19:41.941790   24510 out.go:291] Setting OutFile to fd 1 ...
	I0213 16:19:41.942057   24510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:19:41.942062   24510 out.go:304] Setting ErrFile to fd 2...
	I0213 16:19:41.942067   24510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 16:19:41.942264   24510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 16:19:41.943740   24510 out.go:298] Setting JSON to false
	I0213 16:19:41.966384   24510 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6841,"bootTime":1707863140,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 16:19:41.966501   24510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 16:19:41.988376   24510 out.go:177] * [newest-cni-926000] minikube v1.32.0 on Darwin 14.3.1
	I0213 16:19:42.009817   24510 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 16:19:42.009864   24510 notify.go:220] Checking for updates...
	I0213 16:19:42.052902   24510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:19:42.113771   24510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 16:19:42.187874   24510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 16:19:42.246098   24510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 16:19:42.321787   24510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 16:19:42.361486   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:19:42.362042   24510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 16:19:42.418311   24510 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 16:19:42.418465   24510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:19:42.529391   24510 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:19:42.518137114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
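	The blob above is the raw output of `docker system info --format "{{json .}}"`. To pull individual fields without wading through the full dump, the same Go-template mechanism works per field (illustrative; field names follow the keys visible in the dump):

	    docker system info --format '{{.ServerVersion}} / {{.NCPU}} CPUs / {{.MemTotal}} bytes / {{.OperatingSystem}}'
	    # e.g. 25.0.3 / 12 CPUs / 6213292032 bytes / Docker Desktop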
	I0213 16:19:42.551660   24510 out.go:177] * Using the docker driver based on existing profile
	I0213 16:19:42.594654   24510 start.go:298] selected driver: docker
	I0213 16:19:42.594710   24510 start.go:902] validating driver "docker" against &{Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Liste
nAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:19:42.594826   24510 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 16:19:42.599250   24510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 16:19:42.706215   24510 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-14 00:19:42.695884533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 16:19:42.706459   24510 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 16:19:42.706511   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:19:42.706523   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:19:42.706535   24510 start_flags.go:321] config:
	{Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
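	The profile config echoed above is also persisted to disk as JSON (the save is logged further down); it can be inspected directly from the host, e.g. with Python's stock pretty-printer:

	    python3 -m json.tool /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/config.json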
	I0213 16:19:42.750182   24510 out.go:177] * Starting control plane node newest-cni-926000 in cluster newest-cni-926000
	I0213 16:19:42.772111   24510 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 16:19:42.793778   24510 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 16:19:42.836168   24510 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 16:19:42.836219   24510 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 16:19:42.836252   24510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 16:19:42.836270   24510 cache.go:56] Caching tarball of preloaded images
	I0213 16:19:42.836557   24510 preload.go:174] Found /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0213 16:19:42.836584   24510 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 16:19:42.837532   24510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/config.json ...
	I0213 16:19:42.887966   24510 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 16:19:42.887980   24510 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 16:19:42.888011   24510 cache.go:194] Successfully downloaded all kic artifacts
	I0213 16:19:42.888051   24510 start.go:365] acquiring machines lock for newest-cni-926000: {Name:mkf7d939bdf8afc10c2d68774a69fb4470edc0fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 16:19:42.888206   24510 start.go:369] acquired machines lock for "newest-cni-926000" in 126.662µs
	I0213 16:19:42.888241   24510 start.go:96] Skipping create...Using existing machine configuration
	I0213 16:19:42.888250   24510 fix.go:54] fixHost starting: 
	I0213 16:19:42.888501   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:19:42.939906   24510 fix.go:102] recreateIfNeeded on newest-cni-926000: state=Stopped err=<nil>
	W0213 16:19:42.939939   24510 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 16:19:42.961640   24510 out.go:177] * Restarting existing docker container for "newest-cni-926000" ...
	I0213 16:19:43.004631   24510 cli_runner.go:164] Run: docker start newest-cni-926000
	I0213 16:19:43.269221   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:19:43.330110   24510 kic.go:430] container "newest-cni-926000" state is running.
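	Stripped of the cli_runner plumbing, the restart path above is just two docker commands:

	    docker start newest-cni-926000
	    docker container inspect newest-cni-926000 --format '{{.State.Status}}'   # expect "running"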
	I0213 16:19:43.330859   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:43.393121   24510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/config.json ...
	I0213 16:19:43.393594   24510 machine.go:88] provisioning docker machine ...
	I0213 16:19:43.393618   24510 ubuntu.go:169] provisioning hostname "newest-cni-926000"
	I0213 16:19:43.393716   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:43.462353   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:43.462800   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:43.462816   24510 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-926000 && echo "newest-cni-926000" | sudo tee /etc/hostname
	I0213 16:19:43.464106   24510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0213 16:19:46.624239   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-926000
	
	I0213 16:19:46.624323   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:46.675925   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:46.676213   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:46.676226   24510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-926000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-926000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-926000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 16:19:46.816146   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:19:46.816164   24510 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
	I0213 16:19:46.816187   24510 ubuntu.go:177] setting up certificates
	I0213 16:19:46.816193   24510 provision.go:83] configureAuth start
	I0213 16:19:46.816264   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:46.866343   24510 provision.go:138] copyHostCerts
	I0213 16:19:46.866452   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
	I0213 16:19:46.866467   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
	I0213 16:19:46.866613   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
	I0213 16:19:46.866885   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
	I0213 16:19:46.866891   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
	I0213 16:19:46.866964   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
	I0213 16:19:46.867149   24510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
	I0213 16:19:46.867155   24510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
	I0213 16:19:46.867229   24510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
	I0213 16:19:46.867370   24510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.newest-cni-926000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-926000]
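	The server certificate generated here carries the SANs listed above (node IP, loopback, and the hostname aliases). For comparison only, an equivalent one-off with openssl would look like the sketch below; minikube does this in Go, and the file names here are hypothetical:

	    openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.newest-cni-926000' \
	      -keyout server-key.pem -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-926000')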
	I0213 16:19:47.008225   24510 provision.go:172] copyRemoteCerts
	I0213 16:19:47.008290   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 16:19:47.008345   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.061564   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:47.163779   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 16:19:47.203341   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 16:19:47.250567   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 16:19:47.296212   24510 provision.go:86] duration metric: configureAuth took 479.996893ms
	I0213 16:19:47.296226   24510 ubuntu.go:193] setting minikube options for container-runtime
	I0213 16:19:47.296372   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:19:47.296432   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.348718   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.349034   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.349044   24510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 16:19:47.488012   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0213 16:19:47.488025   24510 ubuntu.go:71] root file system type: overlay
	I0213 16:19:47.488109   24510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 16:19:47.488190   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.538865   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.539163   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.539209   24510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 16:19:47.702822   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 16:19:47.702914   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.753894   24510 main.go:141] libmachine: Using SSH client type: native
	I0213 16:19:47.754191   24510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil>  [] 0s} 127.0.0.1 57621 <nil> <nil>}
	I0213 16:19:47.754204   24510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 16:19:47.907955   24510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 16:19:47.907974   24510 machine.go:91] provisioned docker machine in 4.514315212s
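	The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner above is a compact idempotent-install idiom: the staged docker.service.new only replaces the live unit, and the daemon is only restarted, when the rendered file actually differs. Generalized as a sketch (the helper name is hypothetical):

	    install_if_changed() {   # mirrors the diff||mv pattern from the log
	      local new="$1" dst="$2" unit="$3"
	      if sudo diff -u "$dst" "$new" >/dev/null 2>&1; then
	        sudo rm -f "$new"                       # unchanged: discard the staged copy
	      else
	        sudo mv "$new" "$dst"
	        sudo systemctl -f daemon-reload && sudo systemctl -f enable "$unit" && sudo systemctl -f restart "$unit"
	      fi
	    }
	    install_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker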
	I0213 16:19:47.907985   24510 start.go:300] post-start starting for "newest-cni-926000" (driver="docker")
	I0213 16:19:47.907994   24510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 16:19:47.908073   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 16:19:47.908128   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:47.961784   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.065345   24510 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 16:19:48.069523   24510 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 16:19:48.069549   24510 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 16:19:48.069557   24510 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 16:19:48.069562   24510 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 16:19:48.069571   24510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
	I0213 16:19:48.069674   24510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
	I0213 16:19:48.069860   24510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
	I0213 16:19:48.070073   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 16:19:48.084929   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:19:48.125054   24510 start.go:303] post-start completed in 217.057155ms
	I0213 16:19:48.125128   24510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 16:19:48.125198   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.176829   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.273141   24510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 16:19:48.278132   24510 fix.go:56] fixHost completed within 5.389813035s
	I0213 16:19:48.278151   24510 start.go:83] releasing machines lock for "newest-cni-926000", held for 5.389862964s
	I0213 16:19:48.278254   24510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-926000
	I0213 16:19:48.329860   24510 ssh_runner.go:195] Run: cat /version.json
	I0213 16:19:48.329876   24510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 16:19:48.329924   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.329950   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:48.388448   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.388461   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:19:48.482525   24510 ssh_runner.go:195] Run: systemctl --version
	I0213 16:19:48.588079   24510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 16:19:48.594236   24510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0213 16:19:48.623913   24510 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0213 16:19:48.623987   24510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 16:19:48.638879   24510 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
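	The find/sed pipeline above patches any loopback CNI config in place: it injects a "name" field when one is missing and pins cniVersion to 1.0.0. Unrolled into a readable sketch (this runs inside the Linux node, so GNU sed -i applies):

	    for f in /etc/cni/net.d/*loopback.conf*; do
	      [ -e "$f" ] || continue                                  # no loopback config present
	      grep -q '"name"' "$f" || \
	        sudo sed -i '/"type": "loopback"/i \    "name": "loopback",' "$f"
	      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	    done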
	I0213 16:19:48.638896   24510 start.go:475] detecting cgroup driver to use...
	I0213 16:19:48.638909   24510 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:19:48.639020   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:19:48.666388   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 16:19:48.682374   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 16:19:48.699460   24510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 16:19:48.699552   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 16:19:48.718052   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:19:48.736395   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 16:19:48.756354   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 16:19:48.775467   24510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 16:19:48.791700   24510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 16:19:48.808244   24510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 16:19:48.822964   24510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 16:19:48.837975   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:48.898612   24510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 16:19:48.986323   24510 start.go:475] detecting cgroup driver to use...
	I0213 16:19:48.986369   24510 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 16:19:48.986475   24510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 16:19:49.007344   24510 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0213 16:19:49.007433   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 16:19:49.027656   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 16:19:49.062324   24510 ssh_runner.go:195] Run: which cri-dockerd
	I0213 16:19:49.066801   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 16:19:49.081972   24510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 16:19:49.119201   24510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 16:19:49.227464   24510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 16:19:49.322450   24510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 16:19:49.322599   24510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 16:19:49.353345   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:49.414159   24510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 16:19:49.690128   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 16:19:49.707371   24510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 16:19:49.725782   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:19:49.742976   24510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 16:19:49.804055   24510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 16:19:49.865553   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:49.929568   24510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 16:19:49.962166   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 16:19:49.979957   24510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 16:19:50.043245   24510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
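	The systemctl churn above amounts to the following activation sequence for cri-dockerd, condensed (the log interleaves is-active probes between the steps):

	    sudo systemctl unmask cri-docker.socket
	    sudo systemctl enable cri-docker.socket
	    sudo systemctl daemon-reload
	    sudo systemctl restart cri-docker.socket cri-docker.service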
	I0213 16:19:50.135417   24510 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 16:19:50.135522   24510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 16:19:50.140345   24510 start.go:543] Will wait 60s for crictl version
	I0213 16:19:50.140399   24510 ssh_runner.go:195] Run: which crictl
	I0213 16:19:50.144771   24510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 16:19:50.196726   24510 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 16:19:50.196815   24510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:19:50.220756   24510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 16:19:50.269553   24510 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0213 16:19:50.269670   24510 cli_runner.go:164] Run: docker exec -t newest-cni-926000 dig +short host.docker.internal
	I0213 16:19:50.390504   24510 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0213 16:19:50.390615   24510 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0213 16:19:50.395268   24510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
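	The /etc/hosts edit above uses a replace-then-append pattern instead of sed: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. As a reusable sketch (the function name is hypothetical):

	    set_pinned_host() {   # mirrors the grep/echo/cp pattern from the log
	      local ip="$1" name="$2" tmp="/tmp/hosts.$$"
	      { grep -v "$(printf '\t')${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	    }
	    set_pinned_host 192.168.65.254 host.minikube.internal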
	I0213 16:19:50.412276   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:50.485933   24510 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0213 16:19:50.508802   24510 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 16:19:50.508959   24510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:19:50.531179   24510 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 16:19:50.531203   24510 docker.go:615] Images already preloaded, skipping extraction
	I0213 16:19:50.531304   24510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 16:19:50.550958   24510 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 16:19:50.550990   24510 cache_images.go:84] Images are preloaded, skipping loading
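	Both listings come from `docker images --format {{.Repository}}:{{.Tag}}` inside the node (only the ordering differs between the two runs). The same presence check by hand, e.g. for etcd:

	    docker images --format '{{.Repository}}:{{.Tag}}' | grep -F 'registry.k8s.io/etcd:3.5.10-0'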
	I0213 16:19:50.551075   24510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 16:19:50.600500   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:19:50.600518   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:19:50.600531   24510 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0213 16:19:50.600547   24510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-926000 NodeName:newest-cni-926000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 16:19:50.600662   24510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-926000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
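	The pod-network-cidr extra-config surfaces three times in the rendered config above: as networking.podSubnet, as the kube-proxy clusterCIDR, and (indirectly) via allocate-node-cidrs on the controller-manager. Once the node registers, the controller-manager carves a per-node range out of 10.42.0.0/16, which can be checked afterwards (sketch; assumes the kubectl context follows the profile name, which minikube configures):

	    kubectl --context newest-cni-926000 get node newest-cni-926000 -o jsonpath='{.spec.podCIDR}'
	    # expect a subnet carved from 10.42.0.0/16, e.g. 10.42.0.0/24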
	
	I0213 16:19:50.600725   24510 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-926000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 16:19:50.600786   24510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 16:19:50.616147   24510 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 16:19:50.616292   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 16:19:50.632032   24510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0213 16:19:50.660886   24510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 16:19:50.689973   24510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
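	With the drop-in, unit file, and kubeadm.yaml staged, the effective kubelet invocation (the ExecStart rendered above) can later be read back through minikube's ssh passthrough (illustrative):

	    out/minikube-darwin-amd64 ssh -p newest-cni-926000 -- sudo systemctl cat kubelet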
	I0213 16:19:50.719143   24510 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0213 16:19:50.724406   24510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 16:19:50.741246   24510 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000 for IP: 192.168.76.2
	I0213 16:19:50.741270   24510 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:19:50.741453   24510 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
	I0213 16:19:50.741529   24510 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
	I0213 16:19:50.741649   24510 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/client.key
	I0213 16:19:50.741794   24510 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.key.31bdca25
	I0213 16:19:50.741862   24510 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.key
	I0213 16:19:50.742076   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
	W0213 16:19:50.742131   24510 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
	I0213 16:19:50.742141   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 16:19:50.742177   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
	I0213 16:19:50.742224   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
	I0213 16:19:50.742253   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
	I0213 16:19:50.742353   24510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
	I0213 16:19:50.742982   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 16:19:50.783224   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 16:19:50.824438   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 16:19:50.866034   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/newest-cni-926000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 16:19:50.906244   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 16:19:50.946991   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0213 16:19:50.990609   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 16:19:51.038144   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 16:19:51.082361   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 16:19:51.123844   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
	I0213 16:19:51.165931   24510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
	I0213 16:19:51.206940   24510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
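
The scp lines above push pre-existing certificates and keys into /var/lib/minikube/certs; the earlier "skipping ... generation" lines show they were reused from a previous run rather than regenerated. A quick Go sanity check that a copied certificate actually matches its private key (a sketch, not part of minikube):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // LoadX509KeyPair fails if the certificate and key do not belong together.
        _, err := tls.LoadX509KeyPair(
            "/var/lib/minikube/certs/apiserver.crt", // paths from the log
            "/var/lib/minikube/certs/apiserver.key",
        )
        if err != nil {
            fmt.Println("cert/key mismatch:", err)
            return
        }
        fmt.Println("apiserver cert and key match")
    }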
	I0213 16:19:51.235805   24510 ssh_runner.go:195] Run: openssl version
	I0213 16:19:51.242558   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 16:19:51.260000   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.264427   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.264475   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 16:19:51.271473   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 16:19:51.286816   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
	I0213 16:19:51.302864   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.307868   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.307917   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
	I0213 16:19:51.314430   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
	I0213 16:19:51.331592   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
	I0213 16:19:51.348154   24510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.352840   24510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.352944   24510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
	I0213 16:19:51.359906   24510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
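
The "openssl x509 -hash" / "ln -fs" pairs above install each CA into OpenSSL's hashed trust directory: tools resolve issuers in /etc/ssl/certs by a "<subject-hash>.0" filename, so the symlink is what makes minikubeCA trusted system-wide. A sketch of the same step driven from Go, shelling out to openssl as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        // e.g. "b5213941" -> /etc/ssl/certs/b5213941.0, matching the log's ln -fs target
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("trusted via", link)
    }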
	I0213 16:19:51.375225   24510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 16:19:51.379494   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 16:19:51.386185   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 16:19:51.393500   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 16:19:51.400097   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 16:19:51.406727   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 16:19:51.413430   24510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
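
Each "openssl x509 -checkend 86400" above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides a cert is still safe to reuse. The equivalent test in Go, as a standard-library sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within the next d (the "openssl -checkend" equivalent).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }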
	I0213 16:19:51.421045   24510 kubeadm.go:404] StartCluster: {Name:newest-cni-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-926000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 16:19:51.421243   24510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:19:51.438776   24510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 16:19:51.453979   24510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 16:19:51.453998   24510 kubeadm.go:636] restartCluster start
	I0213 16:19:51.454047   24510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 16:19:51.468658   24510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:51.468753   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:19:51.522714   24510 kubeconfig.go:135] verify returned: extract IP: "newest-cni-926000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:19:51.522874   24510 kubeconfig.go:146] "newest-cni-926000" context is missing from /Users/jenkins/minikube-integration/18169-6320/kubeconfig - will repair!
	I0213 16:19:51.523187   24510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:19:51.524709   24510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 16:19:51.540155   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:51.540229   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:51.556167   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:52.042231   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:52.042403   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:52.060746   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:52.542106   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:52.542207   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:52.558909   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:53.040293   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:53.040407   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:53.058046   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:53.542310   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:53.542551   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:53.561136   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:54.041063   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:54.041169   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:54.058068   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:54.542205   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:54.542364   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:54.560358   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:55.042270   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:55.042399   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:55.059308   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:55.540537   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:55.540660   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:55.558972   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:56.041064   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:56.041245   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:56.058869   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:56.541651   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:56.541730   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:56.558359   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:57.041992   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:57.042106   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:57.058895   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:57.541051   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:57.541177   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:57.559008   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:58.042385   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:58.042608   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:58.060986   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:58.540346   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:58.540499   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:58.557091   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:59.042375   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:59.042523   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:59.060050   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:19:59.540970   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:19:59.541138   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:19:59.559337   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:00.041076   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:00.041150   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:00.058366   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:00.542489   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:00.542596   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:00.560944   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.040355   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:01.040442   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:01.058108   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.541001   24510 api_server.go:166] Checking apiserver status ...
	I0213 16:20:01.541104   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 16:20:01.558254   24510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.558270   24510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
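
The block of near-identical "Checking apiserver status" entries above is a fixed-interval poll: roughly every 500ms minikube asks pgrep for a kube-apiserver process until its context deadline fires, at which point it concludes, as the line above shows, that the cluster needs a reconfigure. A minimal sketch of that loop (the 10-second deadline is an illustrative assumption, and this is not minikube's actual code):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerPID(ctx context.Context) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            // Same probe the log shows: pgrep exits non-zero when nothing matches.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver error: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := waitForAPIServerPID(ctx); err != nil {
            fmt.Println("needs reconfigure:", err) // matches the log's conclusion
        }
    }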
	I0213 16:20:01.558286   24510 kubeadm.go:1135] stopping kube-system containers ...
	I0213 16:20:01.558358   24510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 16:20:01.578330   24510 docker.go:483] Stopping containers: [8691fd9793a6 491cf802856a 74f4e5c38ae8 d5c43f78eec6 b093aa5e412a 923ae21fcad8 661093fbacd4 8af7911a4da6 64196263fa8d 30ffc84347a4 5c78b52a5638 5b649ba8dbcb d7244c640093 b2219dfe6aa1 c18ee34fd84e]
	I0213 16:20:01.578423   24510 ssh_runner.go:195] Run: docker stop 8691fd9793a6 491cf802856a 74f4e5c38ae8 d5c43f78eec6 b093aa5e412a 923ae21fcad8 661093fbacd4 8af7911a4da6 64196263fa8d 30ffc84347a4 5c78b52a5638 5b649ba8dbcb d7244c640093 b2219dfe6aa1 c18ee34fd84e
	I0213 16:20:01.599527   24510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 16:20:01.617540   24510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 16:20:01.632316   24510 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 14 00:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 14 00:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 14 00:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 14 00:19 /etc/kubernetes/scheduler.conf
	
	I0213 16:20:01.632383   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 16:20:01.647825   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 16:20:01.662932   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 16:20:01.678009   24510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.678079   24510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 16:20:01.693163   24510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 16:20:01.708125   24510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 16:20:01.708184   24510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 16:20:01.723608   24510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 16:20:01.740672   24510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 16:20:01.740695   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:01.805610   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.704983   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.838689   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:02.898002   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
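
Rather than a full "kubeadm init", the restart path above replays only selected phases against the regenerated config: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A sketch of driving that sequence from Go (binary and config paths taken from the log; not minikube's actual implementation):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(err) // each phase must succeed before the next runs
            }
        }
    }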
	I0213 16:20:03.004357   24510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:20:03.004438   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:03.504583   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:04.004844   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:04.029901   24510 api_server.go:72] duration metric: took 1.02552605s to wait for apiserver process to appear ...
	I0213 16:20:04.029940   24510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 16:20:04.029973   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:06.793285   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 16:20:06.793325   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 16:20:06.793339   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:06.802722   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:06.802771   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:07.030256   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:07.036814   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:07.036835   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:07.530171   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:07.538082   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:07.538101   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:08.030689   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:08.039238   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 16:20:08.039266   24510 api_server.go:103] status: https://127.0.0.1:57620/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 16:20:08.530120   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:08.535607   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 200:
	ok
	I0213 16:20:08.542053   24510 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 16:20:08.542067   24510 api_server.go:131] duration metric: took 4.512061537s to wait for apiserver health ...
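
The healthz transcript above shows the normal startup progression: first 403 (the anonymous probe is rejected before the RBAC bootstrap roles exist), then 500 with a per-poststarthook status list as controllers come up, and finally 200 with body "ok". A sketch of such a probe (TLS verification is skipped because the probe is anonymous and the CA may not be trusted yet; the port is the one from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://127.0.0.1:57620/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // 403 and 500 mean "keep waiting"; 200 with body "ok" means healthy.
        fmt.Println(resp.StatusCode, string(body))
    }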
	I0213 16:20:08.542075   24510 cni.go:84] Creating CNI manager for ""
	I0213 16:20:08.542085   24510 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 16:20:08.565586   24510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 16:20:08.586490   24510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 16:20:08.603362   24510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
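
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain minikube recommends for the docker driver with the docker runtime on Kubernetes v1.24+. A hypothetical minimal conflist of that shape, assembled in Go (field values are illustrative assumptions, not minikube's exact template; only the subnet is taken from the podSubnet in the kubeadm config above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.42.0.0/16", // podSubnet from the kubeadm config
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }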
	I0213 16:20:08.631771   24510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 16:20:08.640763   24510 system_pods.go:59] 8 kube-system pods found
	I0213 16:20:08.640782   24510 system_pods.go:61] "coredns-76f75df574-n2fjt" [75e4b513-27ac-494f-837c-acc037c73f30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 16:20:08.640791   24510 system_pods.go:61] "etcd-newest-cni-926000" [8518fdf2-78dc-46bc-a387-aaff7faeae4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 16:20:08.640796   24510 system_pods.go:61] "kube-apiserver-newest-cni-926000" [2b885b86-acc6-4ee3-be4b-6634cc674662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 16:20:08.640801   24510 system_pods.go:61] "kube-controller-manager-newest-cni-926000" [27c92979-f6ad-41e4-b688-3f9f6c0baafb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 16:20:08.640807   24510 system_pods.go:61] "kube-proxy-rr5df" [3e0247bc-bafb-4bc8-831f-4dd3c74b1c4a] Running
	I0213 16:20:08.640812   24510 system_pods.go:61] "kube-scheduler-newest-cni-926000" [99f79f45-8da6-4609-8841-5fdf7c3f7aa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 16:20:08.640817   24510 system_pods.go:61] "metrics-server-57f55c9bc5-jkgzd" [d1ab78c1-b022-4af9-bff7-62efcdf57c44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 16:20:08.640821   24510 system_pods.go:61] "storage-provisioner" [cc992d3f-3e9e-40e7-beef-1ffb833a2acd] Running
	I0213 16:20:08.640826   24510 system_pods.go:74] duration metric: took 9.044567ms to wait for pod list to return data ...
	I0213 16:20:08.640834   24510 node_conditions.go:102] verifying NodePressure condition ...
	I0213 16:20:08.644201   24510 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 16:20:08.644215   24510 node_conditions.go:123] node cpu capacity is 12
	I0213 16:20:08.644225   24510 node_conditions.go:105] duration metric: took 3.38773ms to run NodePressure ...
	I0213 16:20:08.644236   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 16:20:08.903641   24510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 16:20:08.912143   24510 ops.go:34] apiserver oom_adj: -16
	I0213 16:20:08.912158   24510 kubeadm.go:640] restartCluster took 17.45793868s
	I0213 16:20:08.912165   24510 kubeadm.go:406] StartCluster complete in 17.490912738s
	I0213 16:20:08.912177   24510 settings.go:142] acquiring lock: {Name:mk73e2877e5f833d3067188c2d2115030ace2af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:20:08.912250   24510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 16:20:08.912896   24510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/kubeconfig: {Name:mk44cd4b9e88d1002bf6fa3af05bfaa649323b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 16:20:08.913174   24510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 16:20:08.913224   24510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 16:20:08.913264   24510 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-926000"
	I0213 16:20:08.913278   24510 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-926000"
	I0213 16:20:08.913278   24510 addons.go:69] Setting dashboard=true in profile "newest-cni-926000"
	W0213 16:20:08.913283   24510 addons.go:243] addon storage-provisioner should already be in state true
	I0213 16:20:08.913290   24510 addons.go:234] Setting addon dashboard=true in "newest-cni-926000"
	I0213 16:20:08.913285   24510 addons.go:69] Setting default-storageclass=true in profile "newest-cni-926000"
	W0213 16:20:08.913294   24510 addons.go:243] addon dashboard should already be in state true
	I0213 16:20:08.913304   24510 config.go:182] Loaded profile config "newest-cni-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 16:20:08.913309   24510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-926000"
	I0213 16:20:08.913317   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913316   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913351   24510 addons.go:69] Setting metrics-server=true in profile "newest-cni-926000"
	I0213 16:20:08.913378   24510 addons.go:234] Setting addon metrics-server=true in "newest-cni-926000"
	W0213 16:20:08.913389   24510 addons.go:243] addon metrics-server should already be in state true
	I0213 16:20:08.913432   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:08.913590   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.913659   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.913788   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.914566   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:08.923914   24510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-926000" context rescaled to 1 replicas
	I0213 16:20:08.923971   24510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 16:20:08.945632   24510 out.go:177] * Verifying Kubernetes components...
	I0213 16:20:08.989545   24510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 16:20:09.023186   24510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 16:20:09.003136   24510 addons.go:234] Setting addon default-storageclass=true in "newest-cni-926000"
	I0213 16:20:09.017656   24510 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 16:20:09.017680   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.044364   24510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 16:20:09.044372   24510 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	W0213 16:20:09.044373   24510 addons.go:243] addon default-storageclass should already be in state true
	I0213 16:20:09.044395   24510 host.go:66] Checking if "newest-cni-926000" exists ...
	I0213 16:20:09.044405   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 16:20:09.065679   24510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:20:09.066057   24510 cli_runner.go:164] Run: docker container inspect newest-cni-926000 --format={{.State.Status}}
	I0213 16:20:09.086196   24510 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0213 16:20:09.086215   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 16:20:09.086268   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 16:20:09.107412   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0213 16:20:09.107436   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0213 16:20:09.107479   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.107509   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.107533   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.116854   24510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 16:20:09.117096   24510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 16:20:09.144236   24510 api_server.go:72] duration metric: took 220.214345ms to wait for apiserver process to appear ...
	I0213 16:20:09.144256   24510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 16:20:09.144284   24510 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57620/healthz ...
	I0213 16:20:09.153688   24510 api_server.go:279] https://127.0.0.1:57620/healthz returned 200:
	ok
	I0213 16:20:09.155546   24510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 16:20:09.155565   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 16:20:09.155673   24510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-926000
	I0213 16:20:09.156838   24510 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 16:20:09.156918   24510 api_server.go:131] duration metric: took 12.651851ms to wait for apiserver health ...
	I0213 16:20:09.156940   24510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 16:20:09.165285   24510 system_pods.go:59] 8 kube-system pods found
	I0213 16:20:09.165314   24510 system_pods.go:61] "coredns-76f75df574-n2fjt" [75e4b513-27ac-494f-837c-acc037c73f30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 16:20:09.165326   24510 system_pods.go:61] "etcd-newest-cni-926000" [8518fdf2-78dc-46bc-a387-aaff7faeae4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 16:20:09.165341   24510 system_pods.go:61] "kube-apiserver-newest-cni-926000" [2b885b86-acc6-4ee3-be4b-6634cc674662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 16:20:09.165355   24510 system_pods.go:61] "kube-controller-manager-newest-cni-926000" [27c92979-f6ad-41e4-b688-3f9f6c0baafb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 16:20:09.165362   24510 system_pods.go:61] "kube-proxy-rr5df" [3e0247bc-bafb-4bc8-831f-4dd3c74b1c4a] Running
	I0213 16:20:09.165371   24510 system_pods.go:61] "kube-scheduler-newest-cni-926000" [99f79f45-8da6-4609-8841-5fdf7c3f7aa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 16:20:09.165378   24510 system_pods.go:61] "metrics-server-57f55c9bc5-jkgzd" [d1ab78c1-b022-4af9-bff7-62efcdf57c44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 16:20:09.165384   24510 system_pods.go:61] "storage-provisioner" [cc992d3f-3e9e-40e7-beef-1ffb833a2acd] Running
	I0213 16:20:09.165392   24510 system_pods.go:74] duration metric: took 8.440107ms to wait for pod list to return data ...
	I0213 16:20:09.165399   24510 default_sa.go:34] waiting for default service account to be created ...
	I0213 16:20:09.169659   24510 default_sa.go:45] found service account: "default"
	I0213 16:20:09.169676   24510 default_sa.go:55] duration metric: took 4.27117ms for default service account to be created ...
	I0213 16:20:09.169684   24510 kubeadm.go:581] duration metric: took 245.66785ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0213 16:20:09.169697   24510 node_conditions.go:102] verifying NodePressure condition ...
	I0213 16:20:09.175089   24510 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0213 16:20:09.175121   24510 node_conditions.go:123] node cpu capacity is 12
	I0213 16:20:09.175139   24510 node_conditions.go:105] duration metric: took 5.436985ms to run NodePressure ...
	I0213 16:20:09.175159   24510 start.go:228] waiting for startup goroutines ...
	I0213 16:20:09.182298   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.182291   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.183549   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.225316   24510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57621 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/newest-cni-926000/id_rsa Username:docker}
	I0213 16:20:09.307302   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 16:20:09.307301   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 16:20:09.307313   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 16:20:09.307684   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0213 16:20:09.307695   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0213 16:20:09.344399   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0213 16:20:09.344428   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0213 16:20:09.345040   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 16:20:09.345052   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 16:20:09.352436   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 16:20:09.412807   24510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 16:20:09.412826   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 16:20:09.414924   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0213 16:20:09.414933   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0213 16:20:09.498056   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0213 16:20:09.498069   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0213 16:20:09.498841   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 16:20:09.602210   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0213 16:20:09.602229   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0213 16:20:09.700772   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0213 16:20:09.700789   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0213 16:20:09.738703   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0213 16:20:09.738744   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0213 16:20:09.824970   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0213 16:20:09.824990   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0213 16:20:09.908251   24510 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 16:20:09.908268   24510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0213 16:20:09.943790   24510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0213 16:20:10.398321   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090981133s)
	I0213 16:20:10.398353   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.045879571s)
	I0213 16:20:10.552796   24510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053883843s)
	I0213 16:20:10.552836   24510 addons.go:470] Verifying addon metrics-server=true in "newest-cni-926000"
	I0213 16:20:10.818199   24510 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-926000 addons enable metrics-server
	
	I0213 16:20:10.876387   24510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0213 16:20:10.935379   24510 addons.go:505] enable addons completed in 2.022140088s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0213 16:20:10.935417   24510 start.go:233] waiting for cluster config update ...
	I0213 16:20:10.935436   24510 start.go:242] writing updated cluster config ...
	I0213 16:20:10.935924   24510 ssh_runner.go:195] Run: rm -f paused
	I0213 16:20:10.981894   24510 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 16:20:11.003297   24510 out.go:177] * Done! kubectl is now configured to use "newest-cni-926000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.561669846Z" level=info msg="Loading containers: start."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.650854264Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.688287423Z" level=info msg="Loading containers: done."
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697470883Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.697576732Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.720967399Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:02 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	Feb 14 00:03:02 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:02.721084588Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.066588540Z" level=info msg="Processing signal 'terminated'"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.067792543Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[715]: time="2024-02-14T00:03:11.068003985Z" level=info msg="Daemon shutdown complete"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: docker.service: Deactivated successfully.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Starting Docker Application Container Engine...
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.124939033Z" level=info msg="Starting up"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.391957321Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.574713073Z" level=info msg="Loading containers: start."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.666161350Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.703341083Z" level=info msg="Loading containers: done."
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711453433Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.711511537Z" level=info msg="Daemon has completed initialization"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734314387Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 00:03:11 old-k8s-version-745000 dockerd[941]: time="2024-02-14T00:03:11.734519985Z" level=info msg="API listen on [::]:2376"
	Feb 14 00:03:11 old-k8s-version-745000 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-14T00:26:54Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 00:26:54 up  1:49,  0 users,  load average: 2.35, 3.08, 4.13
	Linux old-k8s-version-745000 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Jan 30 09:48:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 14 00:26:52 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:26:53 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1343.
	Feb 14 00:26:53 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:26:53 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: I0214 00:26:53.548388   39995 server.go:410] Version: v1.16.0
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: I0214 00:26:53.548617   39995 plugins.go:100] No cloud provider specified.
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: I0214 00:26:53.548626   39995 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: I0214 00:26:53.550179   39995 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: W0214 00:26:53.550859   39995 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: W0214 00:26:53.550922   39995 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:26:53 old-k8s-version-745000 kubelet[39995]: F0214 00:26:53.550942   39995 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:26:53 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:26:53 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 00:26:54 old-k8s-version-745000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1344.
	Feb 14 00:26:54 old-k8s-version-745000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 00:26:54 old-k8s-version-745000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: I0214 00:26:54.307768   40100 server.go:410] Version: v1.16.0
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: I0214 00:26:54.307975   40100 plugins.go:100] No cloud provider specified.
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: I0214 00:26:54.307983   40100 server.go:773] Client rotation is on, will bootstrap in background
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: I0214 00:26:54.309685   40100 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: W0214 00:26:54.331393   40100 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: W0214 00:26:54.331465   40100 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 14 00:26:54 old-k8s-version-745000 kubelet[40100]: F0214 00:26:54.331489   40100 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 14 00:26:54 old-k8s-version-745000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 00:26:54 old-k8s-version-745000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 2 (467.084526ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (385.92s)
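
A note on the failure above: the kubelet section shows kubelet v1.16.0 in a systemd restart loop (restart counter at 1343, then 1344), dying each time on "failed to run Kubelet: mountpoint for cpu not found". This host (kernel 6.6.12-linuxkit, Docker Desktop) typically runs a unified cgroup v2 hierarchy, and kubelet v1.16 predates cgroup v2 support, so it cannot find the legacy "cpu" controller mount. With the kubelet down, the apiserver never starts, which also accounts for the refused connection on localhost:8443 in "describe nodes" and the missing /var/run/dockershim.sock in the "container status" probe. A minimal diagnostic sketch, assuming the minikube node is the Docker container old-k8s-version-745000 named in the logs:

	# List cgroup mounts inside the node; on a cgroup-v2-only host there is
	# no per-controller "cpu" mount, only the unified /sys/fs/cgroup mount.
	docker exec old-k8s-version-745000 sh -c 'mount | grep cgroup'

	# On cgroup v2 this file exists and lists the enabled controllers.
	docker exec old-k8s-version-745000 sh -c 'cat /sys/fs/cgroup/cgroup.controllers'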


Test pass (300/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.2
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
9 TestDownloadOnly/v1.16.0/DeleteAll 0.66
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.28.4/json-events 20.44
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.31
18 TestDownloadOnly/v1.28.4/DeleteAll 0.64
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.29.0-rc.2/json-events 18.5
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.29
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.65
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 2
30 TestBinaryMirror 1.65
31 TestOffline 43.49
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 338.21
40 TestAddons/parallel/InspektorGadget 12.03
41 TestAddons/parallel/MetricsServer 6.83
42 TestAddons/parallel/HelmTiller 10.08
44 TestAddons/parallel/CSI 49.96
45 TestAddons/parallel/Headlamp 14.54
46 TestAddons/parallel/CloudSpanner 6.88
47 TestAddons/parallel/LocalPath 57.03
48 TestAddons/parallel/NvidiaDevicePlugin 5.67
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.78
54 TestCertOptions 26.44
55 TestCertExpiration 233.11
56 TestDockerFlags 28.34
57 TestForceSystemdFlag 27.21
58 TestForceSystemdEnv 27.45
61 TestHyperKitDriverInstallOrUpdate 7.75
64 TestErrorSpam/setup 20.95
65 TestErrorSpam/start 2.06
66 TestErrorSpam/status 1.29
67 TestErrorSpam/pause 1.73
68 TestErrorSpam/unpause 1.89
69 TestErrorSpam/stop 2.82
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 75.48
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.58
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.9
81 TestFunctional/serial/CacheCmd/cache/add_local 1.58
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.4
86 TestFunctional/serial/CacheCmd/cache/delete 0.16
87 TestFunctional/serial/MinikubeKubectlCmd 1.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.57
89 TestFunctional/serial/ExtraConfig 36.81
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.15
92 TestFunctional/serial/LogsFileCmd 3.39
93 TestFunctional/serial/InvalidService 4.2
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 14.19
97 TestFunctional/parallel/DryRun 1.52
98 TestFunctional/parallel/InternationalLanguage 0.72
99 TestFunctional/parallel/StatusCmd 1.32
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 28.55
107 TestFunctional/parallel/SSHCmd 0.81
108 TestFunctional/parallel/CpCmd 2.85
109 TestFunctional/parallel/MySQL 108.95
110 TestFunctional/parallel/FileSync 0.43
111 TestFunctional/parallel/CertSync 2.47
115 TestFunctional/parallel/NodeLabels 0.05
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
119 TestFunctional/parallel/License 1.39
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.13
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
133 TestFunctional/parallel/ProfileCmd/profile_list 0.5
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
135 TestFunctional/parallel/MountCmd/any-port 11.52
136 TestFunctional/parallel/ServiceCmd/List 0.63
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
138 TestFunctional/parallel/ServiceCmd/HTTPS 15
139 TestFunctional/parallel/MountCmd/specific-port 2.24
140 TestFunctional/parallel/MountCmd/VerifyCleanup 2.61
141 TestFunctional/parallel/ServiceCmd/Format 15
142 TestFunctional/parallel/ServiceCmd/URL 15
143 TestFunctional/parallel/Version/short 0.16
144 TestFunctional/parallel/Version/components 1
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
149 TestFunctional/parallel/ImageCommands/ImageBuild 5.55
150 TestFunctional/parallel/ImageCommands/Setup 5.62
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.58
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.37
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.97
154 TestFunctional/parallel/DockerEnv/bash 1.76
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.55
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.91
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.52
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestImageBuild/serial/Setup 21.78
169 TestImageBuild/serial/NormalBuild 4.33
170 TestImageBuild/serial/BuildWithBuildArg 1.22
171 TestImageBuild/serial/BuildWithDockerIgnore 1.05
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.03
182 TestJSONOutput/start/Command 37.15
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.57
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.6
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.79
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.8
207 TestKicCustomNetwork/create_custom_network 24.13
208 TestKicCustomNetwork/use_default_bridge_network 24.49
209 TestKicExistingNetwork 24.51
210 TestKicCustomSubnet 23.9
211 TestKicStaticIP 24.78
212 TestMainNoArgs 0.08
213 TestMinikubeProfile 51.87
216 TestMountStart/serial/StartWithMountFirst 7.86
217 TestMountStart/serial/VerifyMountFirst 0.38
218 TestMountStart/serial/StartWithMountSecond 7.97
219 TestMountStart/serial/VerifyMountSecond 0.39
220 TestMountStart/serial/DeleteFirst 2.07
221 TestMountStart/serial/VerifyMountPostDelete 0.38
222 TestMountStart/serial/Stop 1.55
223 TestMountStart/serial/RestartStopped 8.88
224 TestMountStart/serial/VerifyMountPostStop 0.39
227 TestMultiNode/serial/FreshStart2Nodes 64.34
228 TestMultiNode/serial/DeployApp2Nodes 47.14
229 TestMultiNode/serial/PingHostFrom2Pods 0.94
230 TestMultiNode/serial/AddNode 16.43
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.49
233 TestMultiNode/serial/CopyFile 14.65
234 TestMultiNode/serial/StopNode 3
235 TestMultiNode/serial/StartAfterStop 14.29
236 TestMultiNode/serial/RestartKeepsNodes 121.58
237 TestMultiNode/serial/DeleteNode 5.99
238 TestMultiNode/serial/StopMultiNode 21.81
239 TestMultiNode/serial/RestartMultiNode 82.5
240 TestMultiNode/serial/ValidateNameConflict 26.32
244 TestPreload 175.37
246 TestScheduledStopUnix 96.13
249 TestInsufficientStorage 10.56
250 TestRunningBinaryUpgrade 183.33
253 TestMissingContainerUpgrade 104.31
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 20.01
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 22.92
267 TestStoppedBinaryUpgrade/Setup 4.61
268 TestStoppedBinaryUpgrade/Upgrade 73.61
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.5
271 TestPause/serial/Start 36.73
272 TestPause/serial/SecondStartNoReconfiguration 34.37
273 TestPause/serial/Pause 0.69
274 TestPause/serial/VerifyStatus 0.42
275 TestPause/serial/Unpause 0.68
276 TestPause/serial/PauseAgain 0.73
277 TestPause/serial/DeletePaused 2.49
278 TestPause/serial/VerifyDeletedResources 0.38
287 TestNoKubernetes/serial/StartNoK8sWithVersion 0.56
288 TestNoKubernetes/serial/StartWithK8s 26.69
289 TestNoKubernetes/serial/StartWithStopK8s 9.39
290 TestNoKubernetes/serial/Start 7.12
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
292 TestNoKubernetes/serial/ProfileList 1.44
293 TestNoKubernetes/serial/Stop 1.58
294 TestNoKubernetes/serial/StartNoArgs 8.09
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
296 TestNetworkPlugins/group/auto/Start 40.42
297 TestNetworkPlugins/group/kindnet/Start 51.56
298 TestNetworkPlugins/group/auto/KubeletFlags 0.42
299 TestNetworkPlugins/group/auto/NetCatPod 15.21
300 TestNetworkPlugins/group/auto/DNS 0.15
301 TestNetworkPlugins/group/auto/Localhost 0.14
302 TestNetworkPlugins/group/auto/HairPin 0.13
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
305 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
306 TestNetworkPlugins/group/calico/Start 66.63
307 TestNetworkPlugins/group/kindnet/DNS 0.18
308 TestNetworkPlugins/group/kindnet/Localhost 0.13
309 TestNetworkPlugins/group/kindnet/HairPin 0.12
310 TestNetworkPlugins/group/custom-flannel/Start 54.82
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.48
313 TestNetworkPlugins/group/calico/NetCatPod 14.22
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.2
316 TestNetworkPlugins/group/calico/DNS 0.14
317 TestNetworkPlugins/group/calico/Localhost 0.12
318 TestNetworkPlugins/group/calico/HairPin 0.12
319 TestNetworkPlugins/group/custom-flannel/DNS 0.15
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
322 TestNetworkPlugins/group/false/Start 77.86
323 TestNetworkPlugins/group/enable-default-cni/Start 39.29
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.2
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
329 TestNetworkPlugins/group/false/KubeletFlags 0.43
330 TestNetworkPlugins/group/false/NetCatPod 97.23
331 TestNetworkPlugins/group/flannel/Start 49.84
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
334 TestNetworkPlugins/group/flannel/NetCatPod 14.19
335 TestNetworkPlugins/group/flannel/DNS 0.13
336 TestNetworkPlugins/group/flannel/Localhost 0.12
337 TestNetworkPlugins/group/flannel/HairPin 0.12
338 TestNetworkPlugins/group/false/DNS 0.14
339 TestNetworkPlugins/group/false/Localhost 0.13
340 TestNetworkPlugins/group/false/HairPin 0.14
341 TestNetworkPlugins/group/bridge/Start 76.56
342 TestNetworkPlugins/group/kubenet/Start 38.9
343 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
344 TestNetworkPlugins/group/kubenet/NetCatPod 13.23
345 TestNetworkPlugins/group/kubenet/DNS 0.13
346 TestNetworkPlugins/group/kubenet/Localhost 0.12
347 TestNetworkPlugins/group/kubenet/HairPin 0.13
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
349 TestNetworkPlugins/group/bridge/NetCatPod 14.22
350 TestNetworkPlugins/group/bridge/DNS 0.16
351 TestNetworkPlugins/group/bridge/Localhost 0.14
352 TestNetworkPlugins/group/bridge/HairPin 0.14
356 TestStartStop/group/no-preload/serial/FirstStart 53.1
357 TestStartStop/group/no-preload/serial/DeployApp 12.24
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
359 TestStartStop/group/no-preload/serial/Stop 10.98
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.42
361 TestStartStop/group/no-preload/serial/SecondStart 335.1
364 TestStartStop/group/old-k8s-version/serial/Stop 1.54
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.44
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
370 TestStartStop/group/no-preload/serial/Pause 3.45
372 TestStartStop/group/embed-certs/serial/FirstStart 75.32
373 TestStartStop/group/embed-certs/serial/DeployApp 12.24
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
375 TestStartStop/group/embed-certs/serial/Stop 10.95
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.45
377 TestStartStop/group/embed-certs/serial/SecondStart 314.61
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.04
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
382 TestStartStop/group/embed-certs/serial/Pause 3.26
384 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.43
385 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.26
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
387 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.88
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.44
389 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.43
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.49
395 TestStartStop/group/newest-cni/serial/FirstStart 34.82
396 TestStartStop/group/newest-cni/serial/DeployApp 0
397 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
398 TestStartStop/group/newest-cni/serial/Stop 9.23
399 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
400 TestStartStop/group/newest-cni/serial/SecondStart 29.63
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
404 TestStartStop/group/newest-cni/serial/Pause 3.34
TestDownloadOnly/v1.16.0/json-events (21.2s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-554000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-554000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (21.20409778s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.20s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-554000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-554000: exit status 85 (290.373271ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-554000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |          |
	|         | -p download-only-554000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:52:29
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:52:29.908178    6778 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:29.908455    6778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:29.908460    6778 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:29.908465    6778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:29.908653    6778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	W0213 14:52:29.908765    6778 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18169-6320/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18169-6320/.minikube/config/config.json: no such file or directory
	I0213 14:52:29.910727    6778 out.go:298] Setting JSON to true
	I0213 14:52:29.933932    6778 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1609,"bootTime":1707863140,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 14:52:29.934055    6778 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:29.954810    6778 out.go:97] [download-only-554000] minikube v1.32.0 on Darwin 14.3.1
	I0213 14:52:29.975795    6778 out.go:169] MINIKUBE_LOCATION=18169
	W0213 14:52:29.954948    6778 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 14:52:29.954954    6778 notify.go:220] Checking for updates...
	I0213 14:52:30.018729    6778 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 14:52:30.040025    6778 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 14:52:30.082842    6778 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:30.125671    6778 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	W0213 14:52:30.167918    6778 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:52:30.168354    6778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:30.223292    6778 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 14:52:30.223434    6778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:52:30.328931    6778 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:52:30.319238204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:52:30.349848    6778 out.go:97] Using the docker driver based on user configuration
	I0213 14:52:30.349879    6778 start.go:298] selected driver: docker
	I0213 14:52:30.349892    6778 start.go:902] validating driver "docker" against <nil>
	I0213 14:52:30.350079    6778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:52:30.456516    6778 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:52:30.447123834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:52:30.456698    6778 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:52:30.459991    6778 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 14:52:30.460125    6778 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:52:30.480972    6778 out.go:169] Using Docker Desktop driver with root privileges
	I0213 14:52:30.502089    6778 cni.go:84] Creating CNI manager for ""
	I0213 14:52:30.502128    6778 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:52:30.502144    6778 start_flags.go:321] config:
	{Name:download-only-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:30.523746    6778 out.go:97] Starting control plane node download-only-554000 in cluster download-only-554000
	I0213 14:52:30.523775    6778 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 14:52:30.544953    6778 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 14:52:30.545014    6778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:52:30.545101    6778 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 14:52:30.594982    6778 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 14:52:30.595211    6778 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 14:52:30.595344    6778 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 14:52:30.818398    6778 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 14:52:30.818419    6778 cache.go:56] Caching tarball of preloaded images
	I0213 14:52:30.818672    6778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:52:30.840567    6778 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 14:52:30.840590    6778 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:52:31.389688    6778 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0213 14:52:46.736050    6778 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:52:46.736273    6778 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:52:47.301073    6778 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 14:52:47.301308    6778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/download-only-554000/config.json ...
	I0213 14:52:47.301333    6778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/download-only-554000/config.json: {Name:mka94da036612fbc0ce9f3724c8fd95c39de5651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:47.301608    6778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:52:47.301893    6778 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-554000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
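
The "Last Start" log above illustrates minikube's download-only flow: it resolves the versioned preload tarball URL, downloads it with an md5 checksum passed as a query parameter, then saves and verifies that checksum before treating the cache entry as valid. A minimal sketch of re-checking the cached tarball by hand on the macOS agent, assuming the cache path and the md5 value shown in the download URL above:

	# md5 is the stock macOS digest tool; the expected value comes from the
	# ?checksum=md5:... parameter recorded in the download log.
	md5 /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	# expected digest: 326f3ce331abb64565b50b8c9e791244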

TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-554000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.28.4/json-events (20.44s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-994000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-994000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (20.439683011s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (20.44s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-994000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-994000: exit status 85 (312.019893ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-554000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | -p download-only-554000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| delete  | -p download-only-554000        | download-only-554000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| start   | -o=json --download-only        | download-only-994000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | -p download-only-994000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:52:52
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:52:52.430624    6846 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:52.430801    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:52.430806    6846 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:52.430810    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:52.431001    6846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 14:52:52.432495    6846 out.go:298] Setting JSON to true
	I0213 14:52:52.455073    6846 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1632,"bootTime":1707863140,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 14:52:52.455190    6846 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:52.476891    6846 out.go:97] [download-only-994000] minikube v1.32.0 on Darwin 14.3.1
	I0213 14:52:52.498657    6846 out.go:169] MINIKUBE_LOCATION=18169
	I0213 14:52:52.477108    6846 notify.go:220] Checking for updates...
	I0213 14:52:52.542642    6846 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 14:52:52.563554    6846 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 14:52:52.605506    6846 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:52.647471    6846 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	W0213 14:52:52.689442    6846 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:52:52.689817    6846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:52.745748    6846 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 14:52:52.745898    6846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:52:52.849249    6846 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:52:52.839612493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:52:52.871043    6846 out.go:97] Using the docker driver based on user configuration
	I0213 14:52:52.871076    6846 start.go:298] selected driver: docker
	I0213 14:52:52.871089    6846 start.go:902] validating driver "docker" against <nil>
	I0213 14:52:52.871297    6846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:52:52.975803    6846 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:52:52.965647509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:52:52.975995    6846 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:52:52.979024    6846 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 14:52:52.979166    6846 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:52:53.000024    6846 out.go:169] Using Docker Desktop driver with root privileges
	I0213 14:52:53.022037    6846 cni.go:84] Creating CNI manager for ""
	I0213 14:52:53.022076    6846 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:52:53.022095    6846 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:52:53.022112    6846 start_flags.go:321] config:
	{Name:download-only-994000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:53.043887    6846 out.go:97] Starting control plane node download-only-994000 in cluster download-only-994000
	I0213 14:52:53.043931    6846 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 14:52:53.066046    6846 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 14:52:53.066105    6846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:52:53.066196    6846 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 14:52:53.116466    6846 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 14:52:53.116659    6846 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 14:52:53.116676    6846 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 14:52:53.116682    6846 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 14:52:53.116690    6846 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 14:52:53.325989    6846 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 14:52:53.326039    6846 cache.go:56] Caching tarball of preloaded images
	I0213 14:52:53.326374    6846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:52:53.348157    6846 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0213 14:52:53.348215    6846 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:52:53.924509    6846 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0213 14:53:11.266288    6846 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:53:11.266470    6846 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:53:11.894790    6846 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 14:53:11.895025    6846 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/download-only-994000/config.json ...
	I0213 14:53:11.895049    6846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/download-only-994000/config.json: {Name:mk551a9a7ff09eb04589d936729ac2822786e815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:11.895471    6846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:53:11.895802    6846 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-994000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.31s)

TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-994000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnly/v1.29.0-rc.2/json-events (18.5s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-764000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-764000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (18.499419024s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.50s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-764000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-764000: exit status 85 (290.759845ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-554000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | -p download-only-554000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| delete  | -p download-only-554000           | download-only-554000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| start   | -o=json --download-only           | download-only-994000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | -p download-only-994000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:53 PST | 13 Feb 24 14:53 PST |
	| delete  | -p download-only-994000           | download-only-994000 | jenkins | v1.32.0 | 13 Feb 24 14:53 PST | 13 Feb 24 14:53 PST |
	| start   | -o=json --download-only           | download-only-764000 | jenkins | v1.32.0 | 13 Feb 24 14:53 PST |                     |
	|         | -p download-only-764000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:53:14
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:53:14.202401    6916 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:53:14.202554    6916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:53:14.202558    6916 out.go:304] Setting ErrFile to fd 2...
	I0213 14:53:14.202562    6916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:53:14.202755    6916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 14:53:14.204231    6916 out.go:298] Setting JSON to true
	I0213 14:53:14.227189    6916 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1654,"bootTime":1707863140,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 14:53:14.227308    6916 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:53:14.249516    6916 out.go:97] [download-only-764000] minikube v1.32.0 on Darwin 14.3.1
	I0213 14:53:14.271309    6916 out.go:169] MINIKUBE_LOCATION=18169
	I0213 14:53:14.249748    6916 notify.go:220] Checking for updates...
	I0213 14:53:14.315309    6916 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 14:53:14.336418    6916 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 14:53:14.358391    6916 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:53:14.380385    6916 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	W0213 14:53:14.424465    6916 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:53:14.424975    6916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:53:14.483433    6916 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 14:53:14.483579    6916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:53:14.587383    6916 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:53:14.576789139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:53:14.608553    6916 out.go:97] Using the docker driver based on user configuration
	I0213 14:53:14.608595    6916 start.go:298] selected driver: docker
	I0213 14:53:14.608607    6916 start.go:902] validating driver "docker" against <nil>
	I0213 14:53:14.608805    6916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 14:53:14.714013    6916 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:107 SystemTime:2024-02-13 22:53:14.704461141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 14:53:14.714196    6916 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:53:14.717117    6916 start_flags.go:392] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0213 14:53:14.717263    6916 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:53:14.738630    6916 out.go:169] Using Docker Desktop driver with root privileges
	I0213 14:53:14.759641    6916 cni.go:84] Creating CNI manager for ""
	I0213 14:53:14.759679    6916 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:53:14.759694    6916 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:53:14.759704    6916 start_flags.go:321] config:
	{Name:download-only-764000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-764000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:53:14.780433    6916 out.go:97] Starting control plane node download-only-764000 in cluster download-only-764000
	I0213 14:53:14.780479    6916 cache.go:121] Beginning downloading kic base image for docker with docker
	I0213 14:53:14.802614    6916 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 14:53:14.802654    6916 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 14:53:14.802719    6916 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 14:53:14.852440    6916 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 14:53:14.852606    6916 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 14:53:14.852622    6916 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 14:53:14.852628    6916 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 14:53:14.852636    6916 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 14:53:15.056463    6916 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0213 14:53:15.056499    6916 cache.go:56] Caching tarball of preloaded images
	I0213 14:53:15.056818    6916 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 14:53:15.078775    6916 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 14:53:15.078820    6916 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0213 14:53:15.640506    6916 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-764000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.65s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-764000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (2s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-502000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-502000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-502000
--- PASS: TestDownloadOnlyKic (2.00s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-306000 --alsologtostderr --binary-mirror http://127.0.0.1:52040 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-306000 --alsologtostderr --binary-mirror http://127.0.0.1:52040 --driver=docker : (1.032826712s)
helpers_test.go:175: Cleaning up "binary-mirror-306000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-306000
--- PASS: TestBinaryMirror (1.65s)

TestOffline (43.49s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-989000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-989000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (41.028663224s)
helpers_test.go:175: Cleaning up "offline-docker-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-989000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-989000: (2.461270949s)
--- PASS: TestOffline (43.49s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-441000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-441000: exit status 85 (194.002425ms)

-- stdout --
	* Profile "addons-441000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-441000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-441000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-441000: exit status 85 (217.1654ms)

-- stdout --
	* Profile "addons-441000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-441000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (338.21s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-441000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-441000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m38.205692184s)
--- PASS: TestAddons/Setup (338.21s)

TestAddons/parallel/InspektorGadget (12.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kzg47" [4da896c2-2964-4333-b975-232ce3130e4f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005490684s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-441000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-441000: (6.027290785s)
--- PASS: TestAddons/parallel/InspektorGadget (12.03s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.663502ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-q9pfw" [23cbda69-e999-4f68-aa9d-8a5d5a114798] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003762059s
addons_test.go:415: (dbg) Run:  kubectl --context addons-441000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/HelmTiller (10.08s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.036925ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-znd7d" [05d50e28-3d56-452e-a372-97b9f9e2084b] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005122185s
addons_test.go:473: (dbg) Run:  kubectl --context addons-441000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-441000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.267421694s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.08s)

TestAddons/parallel/CSI (49.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 17.86286ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-441000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-441000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9018aec1-a1af-4c62-bd5e-b51dccb70c1c] Pending
helpers_test.go:344: "task-pv-pod" [9018aec1-a1af-4c62-bd5e-b51dccb70c1c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9018aec1-a1af-4c62-bd5e-b51dccb70c1c] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.007790226s
addons_test.go:584: (dbg) Run:  kubectl --context addons-441000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-441000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-441000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-441000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-441000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-441000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-441000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [83dd3f64-93c7-4f3d-94a5-e692ca0c89d7] Pending
helpers_test.go:344: "task-pv-pod-restore" [83dd3f64-93c7-4f3d-94a5-e692ca0c89d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [83dd3f64-93c7-4f3d-94a5-e692ca0c89d7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005484267s
addons_test.go:626: (dbg) Run:  kubectl --context addons-441000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-441000 delete pod task-pv-pod-restore: (1.183471942s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-441000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-441000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-441000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.918143978s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-441000 addons disable volumesnapshots --alsologtostderr -v=1: (1.024956558s)
--- PASS: TestAddons/parallel/CSI (49.96s)
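For readers reproducing this by hand: a minimal sketch of the CSI workflow the test walks through, assuming the addons-441000 profile above. Manifest names are placeholders for the testdata/csi-hostpath-driver files, and the two enable commands are inferred from the matching disable commands at the end of the test.

    minikube -p addons-441000 addons enable volumesnapshots
    minikube -p addons-441000 addons enable csi-hostpath-driver
    kubectl --context addons-441000 create -f pvc.yaml             # PVC served by the hostpath CSI driver
    kubectl --context addons-441000 create -f pv-pod.yaml          # pod that mounts the PVC
    kubectl --context addons-441000 create -f snapshot.yaml        # VolumeSnapshot of the PVC
    kubectl --context addons-441000 create -f pvc-restore.yaml     # new PVC restored from the snapshot
    kubectl --context addons-441000 create -f pv-pod-restore.yaml  # pod that mounts the restored PVC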

TestAddons/parallel/Headlamp (14.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-441000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-441000 --alsologtostderr -v=1: (1.536498441s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-df48m" [38d5dfec-4159-444e-bba1-3c2c440dd224] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-df48m" [38d5dfec-4159-444e-bba1-3c2c440dd224] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004304948s
--- PASS: TestAddons/parallel/Headlamp (14.54s)
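A by-hand equivalent of this check, using only the command, namespace, and label selector visible in the log:

    minikube addons enable headlamp -p addons-441000
    kubectl --context addons-441000 -n headlamp get pods -l app.kubernetes.io/name=headlamp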

TestAddons/parallel/CloudSpanner (6.88s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-fl2xz" [af4639ee-642d-431a-ad9e-d7b727694174] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00463684s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-441000
--- PASS: TestAddons/parallel/CloudSpanner (6.88s)

TestAddons/parallel/LocalPath (57.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-441000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-441000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6e79d3d6-482a-461e-ad5e-939825c1affc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6e79d3d6-482a-461e-ad5e-939825c1affc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6e79d3d6-482a-461e-ad5e-939825c1affc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006003479s
addons_test.go:891: (dbg) Run:  kubectl --context addons-441000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 ssh "cat /opt/local-path-provisioner/pvc-5975bfae-e10e-4665-a51e-55bed44785e4_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-441000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-441000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-441000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.042267633s)
--- PASS: TestAddons/parallel/LocalPath (57.03s)
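A sketch of the same local-path flow run manually, assuming manifests equivalent to the testdata/storage-provisioner-rancher files; <pv-name> is a placeholder for the provisioned volume name (pvc-5975bfae-... in the run above):

    kubectl --context addons-441000 apply -f pvc.yaml   # PVC bound by the local-path provisioner
    kubectl --context addons-441000 apply -f pod.yaml   # pod that writes file1 into the volume
    minikube -p addons-441000 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"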

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z7zwp" [34ef5d83-a163-423e-ad13-3c7553d7af41] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005848686s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-441000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-znc85" [a65c6326-6809-42d0-961f-855e8c2eb397] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005909337s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-441000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-441000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-441000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-441000: (11.068432364s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-441000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-441000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-441000
--- PASS: TestAddons/StoppedEnableDisable (11.78s)
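What this test is exercising: addon state can be toggled while the cluster is stopped. A minimal sketch:

    minikube stop -p addons-441000
    minikube addons enable dashboard -p addons-441000    # succeeds even though the cluster is down
    minikube addons disable dashboard -p addons-441000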

TestCertOptions (26.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-628000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0213 15:40:14.214343    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-628000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (23.181844831s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-628000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-628000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-628000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-628000: (2.414010932s)
--- PASS: TestCertOptions (26.44s)
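To confirm the extra SANs and port by hand, dump the apiserver certificate the same way the test does; the grep filter here is an added convenience, not part of the test:

    minikube -p cert-options-628000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"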

TestCertExpiration (233.11s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-535000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-535000 --memory=2048 --cert-expiration=3m --driver=docker : (23.677853434s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-535000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0213 15:43:17.268949    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-535000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.672811094s)
helpers_test.go:175: Cleaning up "cert-expiration-535000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-535000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-535000: (2.756367556s)
--- PASS: TestCertExpiration (233.11s)
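A by-hand version of the rotation this test exercises; the wait between the two starts (roughly the 3m expiry window, which accounts for most of the 233s runtime) lets the short-lived certificates lapse before the second start re-issues them:

    minikube start -p cert-expiration-535000 --memory=2048 --cert-expiration=3m --driver=docker
    sleep 180    # let the 3m certificates expire
    minikube start -p cert-expiration-535000 --memory=2048 --cert-expiration=8760h --driver=docker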

TestDockerFlags (28.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-487000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-487000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (25.039397049s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-487000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-487000: (2.434246408s)
--- PASS: TestDockerFlags (28.34s)
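The two ssh checks above verify that the flags reached dockerd; the expected substrings below are inferred from the start flags, since the log omits the command output:

    minikube -p docker-flags-487000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # should include FOO=BAR and BAZ=BAT from --docker-env
    minikube -p docker-flags-487000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # should include --debug and --icc=true from --docker-opt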

TestForceSystemdFlag (27.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-854000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-854000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.864294021s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-854000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-854000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-854000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-854000: (2.883062776s)
--- PASS: TestForceSystemdFlag (27.21s)
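A by-hand equivalent; the expected output is inferred from the flag's purpose, as the log does not show it:

    minikube start -p force-systemd-flag-854000 --memory=2048 --force-systemd --driver=docker
    minikube -p force-systemd-flag-854000 ssh "docker info --format {{.CgroupDriver}}"
    # expected: systemd rather than the default cgroupfs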

TestForceSystemdEnv (27.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-528000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0213 15:39:17.238674    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-528000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (24.402584207s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-528000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-528000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-528000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-528000: (2.551721715s)
--- PASS: TestForceSystemdEnv (27.45s)
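The same check driven by the environment rather than the flag. MINIKUBE_FORCE_SYSTEMD=true is assumed here as the env-var equivalent; the log does not show the variable being exported:

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-528000 --memory=2048 --driver=docker
    minikube -p force-systemd-env-528000 ssh "docker info --format {{.CgroupDriver}}"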

TestHyperKitDriverInstallOrUpdate (7.75s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperKitDriverInstallOrUpdate (7.75s)

TestErrorSpam/setup (20.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-605000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-605000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 --driver=docker : (20.946397391s)
--- PASS: TestErrorSpam/setup (20.95s)

TestErrorSpam/start (2.06s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 start --dry-run
--- PASS: TestErrorSpam/start (2.06s)

TestErrorSpam/status (1.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 status
--- PASS: TestErrorSpam/status (1.29s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

TestErrorSpam/stop (2.82s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 stop: (2.172455156s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-605000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-605000 stop
--- PASS: TestErrorSpam/stop (2.82s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/test/nested/copy/6776/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-443000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m15.476674395s)
--- PASS: TestFunctional/serial/StartWithProxy (75.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-443000 --alsologtostderr -v=8: (36.581873249s)
functional_test.go:659: soft start took 36.582355653s for "functional-443000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.58s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-443000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:3.1: (3.60188332s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:3.3: (3.655283613s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 cache add registry.k8s.io/pause:latest: (2.64367371s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.90s)
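The cache subcommands exercised by this group of tests, collected into one by-hand sequence (note that cache list and cache delete are invoked without a profile flag in the logs that follow):

    minikube -p functional-443000 cache add registry.k8s.io/pause:3.1   # pull, store on the host, load into the node
    minikube cache list
    minikube cache delete registry.k8s.io/pause:3.1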

TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3273905728/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache add minikube-local-cache-test:functional-443000
E0213 15:04:17.234492    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.241626    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.252421    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.272650    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.312994    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.393369    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:17.554087    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 cache add minikube-local-cache-test:functional-443000: (1.067913219s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache delete minikube-local-cache-test:functional-443000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-443000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
E0213 15:04:17.875395    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh sudo docker rmi registry.k8s.io/pause:latest
E0213 15:04:18.515614    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (400.012324ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cache reload
E0213 15:04:19.796025    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 cache reload: (2.164593013s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.40s)
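Step by step, what this test demonstrates: delete an image inside the node, confirm it is gone, then use cache reload to push it back from the host-side cache:

    minikube -p functional-443000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-443000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    minikube -p functional-443000 cache reload                                            # re-loads cached images into the node
    minikube -p functional-443000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again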

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 kubectl -- --context functional-443000 get pods
E0213 15:04:22.358176    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 kubectl -- --context functional-443000 get pods: (1.149719916s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-443000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-443000 get pods: (1.564574472s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

TestFunctional/serial/ExtraConfig (36.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0213 15:04:27.478903    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:37.719331    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:04:58.200297    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-443000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.8137738s)
functional_test.go:757: restart took 36.813914729s for "functional-443000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.81s)
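--extra-config takes component.key=value pairs, as in the admission-plugin example above. A hedged way to confirm the flag landed (this verification step is an assumption, not part of the test):

    minikube start -p functional-443000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-443000 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins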

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-443000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 logs: (3.153411317s)
--- PASS: TestFunctional/serial/LogsCmd (3.15s)

TestFunctional/serial/LogsFileCmd (3.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2820261430/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2820261430/001/logs.txt: (3.387171378s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.39s)

TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-443000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-443000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-443000: exit status 115 (572.229743ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32753 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-443000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 config get cpus: exit status 14 (61.386834ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 config get cpus: exit status 14 (63.288079ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
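The config lifecycle the test walks through; config get exits 14 when the key is unset, which is why the Non-zero exit lines above are expected rather than failures:

    minikube -p functional-443000 config set cpus 2
    minikube -p functional-443000 config get cpus    # prints 2, exit 0
    minikube -p functional-443000 config unset cpus
    minikube -p functional-443000 config get cpus    # exit 14: key not found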

TestFunctional/parallel/DashboardCmd (14.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-443000 --alsologtostderr -v=1]
2024/02/13 15:06:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-443000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 9057: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.19s)

TestFunctional/parallel/DryRun (1.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-443000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (718.422421ms)
-- stdout --
	* [functional-443000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0213 15:06:03.652909    8980 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:06:03.653212    8980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:03.653218    8980 out.go:304] Setting ErrFile to fd 2...
	I0213 15:06:03.653223    8980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:03.653486    8980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:06:03.655392    8980 out.go:298] Setting JSON to false
	I0213 15:06:03.678980    8980 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2423,"bootTime":1707863140,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:06:03.679171    8980 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:06:03.700752    8980 out.go:177] * [functional-443000] minikube v1.32.0 on Darwin 14.3.1
	I0213 15:06:03.780274    8980 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:06:03.759667    8980 notify.go:220] Checking for updates...
	I0213 15:06:03.841491    8980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:06:03.883437    8980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:06:03.904476    8980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:06:03.925334    8980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:06:03.946479    8980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:06:03.968153    8980 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:06:03.968929    8980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:06:04.028291    8980 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:06:04.028449    8980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:06:04.140171    8980 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:116 SystemTime:2024-02-13 23:06:04.130676918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:06:04.182535    8980 out.go:177] * Using the docker driver based on existing profile
	I0213 15:06:04.203481    8980 start.go:298] selected driver: docker
	I0213 15:06:04.203505    8980 start.go:902] validating driver "docker" against &{Name:functional-443000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-443000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:06:04.203619    8980 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:06:04.228227    8980 out.go:177] 
	W0213 15:06:04.249525    8980 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 15:06:04.270420    8980 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --dry-run --alsologtostderr -v=1 --driver=docker 
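Note: the failure captured above is the point of the dry-run probe: --dry-run validates the request (here, that a 250MiB allocation is below minikube's 1800MB floor) without creating any resources. A reproduction sketch, assuming a local out/minikube-darwin-amd64 build:

    # undersized request: expected to exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-darwin-amd64 start -p functional-443000 --dry-run --memory 250MB --alsologtostderr --driver=docker
    # plain dry run against the existing profile: expected to exit 0
    out/minikube-darwin-amd64 start -p functional-443000 --dry-run --alsologtostderr -v=1 --driver=docker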
--- PASS: TestFunctional/parallel/DryRun (1.52s)

TestFunctional/parallel/InternationalLanguage (0.72s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-443000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-443000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (720.075204ms)
-- stdout --
	* [functional-443000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0213 15:06:02.925978    8962 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:06:02.926122    8962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:02.926127    8962 out.go:304] Setting ErrFile to fd 2...
	I0213 15:06:02.926131    8962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:02.926327    8962 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:06:02.928369    8962 out.go:298] Setting JSON to false
	I0213 15:06:02.953147    8962 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2422,"bootTime":1707863140,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0213 15:06:02.953258    8962 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:06:02.977317    8962 out.go:177] * [functional-443000] minikube v1.32.0 sur Darwin 14.3.1
	I0213 15:06:03.047363    8962 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 15:06:03.025420    8962 notify.go:220] Checking for updates...
	I0213 15:06:03.091324    8962 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	I0213 15:06:03.112303    8962 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0213 15:06:03.133163    8962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:06:03.175199    8962 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	I0213 15:06:03.217187    8962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:06:03.239225    8962 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:06:03.239972    8962 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:06:03.303880    8962 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0213 15:06:03.304036    8962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 15:06:03.417092    8962 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:116 SystemTime:2024-02-13 23:06:03.406698691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0213 15:06:03.459493    8962 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0213 15:06:03.480507    8962 start.go:298] selected driver: docker
	I0213 15:06:03.480533    8962 start.go:902] validating driver "docker" against &{Name:functional-443000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-443000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:06:03.480661    8962 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:06:03.506606    8962 out.go:177] 
	W0213 15:06:03.529807    8962 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
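	[English equivalent, as printed by the DryRun test above: X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB]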
	I0213 15:06:03.551526    8962 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 status -o json
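Note: -f takes a Go template over minikube's status struct; the "kublet" label above is just a literal in the test's format string, while the underlying field is {{.Kubelet}}. An equivalent interactive form (a sketch; same profile assumed):

    out/minikube-darwin-amd64 -p functional-443000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'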
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (28.55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a83261da-6cf4-4625-a789-10a51c835244] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003649925s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-443000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-443000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-443000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-443000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38bd5df3-8794-4c6f-8ba7-d6b6c6c75812] Pending
helpers_test.go:344: "sp-pod" [38bd5df3-8794-4c6f-8ba7-d6b6c6c75812] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [38bd5df3-8794-4c6f-8ba7-d6b6c6c75812] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003629503s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-443000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-443000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-443000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9faea134-0d11-4a23-a296-eff24530ffc5] Pending
helpers_test.go:344: "sp-pod" [9faea134-0d11-4a23-a296-eff24530ffc5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9faea134-0d11-4a23-a296-eff24530ffc5] Running
E0213 15:05:39.160695    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004942858s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-443000 exec sp-pod -- ls /tmp/mount
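Note: the steps above amount to a claim/consume/persist cycle; a condensed sketch using the same commands (assumes the functional-443000 context and the repo's testdata manifests):

    kubectl --context functional-443000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-443000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-443000 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the file must survive because it lives on the PVC, not in the pod
    kubectl --context functional-443000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-443000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-443000 exec sp-pod -- ls /tmp/mount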
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.55s)

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (2.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh -n functional-443000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cp functional-443000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3159778328/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh -n functional-443000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh -n functional-443000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.85s)

TestFunctional/parallel/MySQL (108.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-443000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-8pwmf" [d6020fcd-0a16-45b3-9a6b-c35b837fd82e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-8pwmf" [d6020fcd-0a16-45b3-9a6b-c35b837fd82e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m45.003829172s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- mysql -ppassword -e "show databases;": exit status 1 (152.580836ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- mysql -ppassword -e "show databases;": exit status 1 (121.085002ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- mysql -ppassword -e "show databases;"
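Note: the two failures above (ERROR 1045 and ERROR 2002) are transient states while the mysql container finishes initializing, which is why the test retries until the query succeeds. A shell sketch of the same probe (pod name varies per run):

    until kubectl --context functional-443000 exec mysql-859648c796-8pwmf -- \
        mysql -ppassword -e "show databases;"; do
      sleep 5   # retry through ERROR 1045 / ERROR 2002 during startup
    done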
--- PASS: TestFunctional/parallel/MySQL (108.95s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6776/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/test/nested/copy/6776/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
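Note: this exercises minikube's file sync, which copies anything placed under $MINIKUBE_HOME/files into the node at the same path on the next start. A sketch of that flow (paths assumed; 6776 here is the test runner's pid):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/6776"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/6776/hosts"
    out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/test/nested/copy/6776/hosts"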
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6776.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/ssl/certs/6776.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6776.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /usr/share/ca-certificates/6776.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/67762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/ssl/certs/67762.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/67762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /usr/share/ca-certificates/67762.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
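Note: the .0 entries checked above are OpenSSL-style subject-hash names for the synced certs. A sketch for deriving the hash from a PEM (assumes openssl is available in the guest):

    out/minikube-darwin-amd64 -p functional-443000 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/6776.pem"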
--- PASS: TestFunctional/parallel/CertSync (2.47s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-443000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
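Note: the template indexes the first node and ranges over its label keys; an equivalent interactive form (single-quoted so the shell leaves the template alone):

    kubectl --context functional-443000 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'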
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh "sudo systemctl is-active crio": exit status 1 (400.567927ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/License (1.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.390968616s)
--- PASS: TestFunctional/parallel/License (1.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8502: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-443000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ff1b4c56-a24a-4cb3-88ec-28aaf6dd6408] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ff1b4c56-a24a-4cb3-88ec-28aaf6dd6408] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004557846s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-443000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
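Note: with the tunnel from the earlier steps running, the LoadBalancer service is assigned a reachable ingress IP, which this jsonpath query extracts; interactively:

    kubectl --context functional-443000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'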
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-443000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8557: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-443000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-443000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-5lf9v" [12492752-82f6-458a-bc53-41384e4c73af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-5lf9v" [12492752-82f6-458a-bc53-41384e4c73af] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00364602s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "422.937421ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.147353ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "418.83401ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "80.755471ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
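Note: the timing gap is the substance of these assertions: --light skips the per-profile status probes, which is why the light variants return in ~80ms versus ~420ms for the full listings. For example:

    out/minikube-darwin-amd64 profile list -o json --light   # no cluster status checks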
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/any-port (11.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4021598601/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707865545188459000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4021598601/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707865545188459000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4021598601/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707865545188459000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4021598601/001/test-1707865545188459000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.981217ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 13 23:05 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 13 23:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 13 23:05 test-1707865545188459000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh cat /mount-9p/test-1707865545188459000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-443000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a3986a30-206c-48e2-afb8-aa604004fc17] Pending
helpers_test.go:344: "busybox-mount" [a3986a30-206c-48e2-afb8-aa604004fc17] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a3986a30-206c-48e2-afb8-aa604004fc17] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a3986a30-206c-48e2-afb8-aa604004fc17] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00408015s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-443000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4021598601/001:/mount-9p --alsologtostderr -v=1] ...
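Note: a condensed sketch of the 9p mount round-trip driven above (HOSTDIR stands in for the temp dir; the mount process must stay alive in the background while the guest-side checks run):

    out/minikube-darwin-amd64 mount -p functional-443000 "$HOSTDIR:/mount-9p" --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-443000 ssh "sudo umount -f /mount-9p"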
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.52s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 service list -o json
functional_test.go:1490: Took "626.891907ms" to run "out/minikube-darwin-amd64 -p functional-443000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 service --namespace=default --https --url hello-node: signal: killed (15.004189928s)
-- stdout --
	https://127.0.0.1:52980
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:52980
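Note: the "signal: killed" outcome is expected here: with the docker driver on darwin, "service --url" keeps a tunnel process in the foreground, so the test reads the printed URL and then kills the process after its 15s window. Interactively one would run the same command and leave the terminal open:

    out/minikube-darwin-amd64 -p functional-443000 service --namespace=default --https --url hello-node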
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3600824357/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.889714ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3600824357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh "sudo umount -f /mount-9p": exit status 1 (370.628281ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-443000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3600824357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T" /mount1: exit status 1 (498.978864ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-443000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-443000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2361111624/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 service hello-node --url --format={{.IP}}: signal: killed (15.00248234s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 service hello-node --url: signal: killed (15.004391381s)
-- stdout --
	http://127.0.0.1:53085
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53085
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/Version/short (0.16s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 version -o=json --components: (1.000305076s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-443000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-443000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-443000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-443000 image ls --format short --alsologtostderr:
I0213 15:06:50.310501    9374 out.go:291] Setting OutFile to fd 1 ...
I0213 15:06:50.311119    9374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:50.311126    9374 out.go:304] Setting ErrFile to fd 2...
I0213 15:06:50.311133    9374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:50.311622    9374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:06:50.312472    9374 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:50.312568    9374 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:50.312941    9374 cli_runner.go:164] Run: docker container inspect functional-443000 --format={{.State.Status}}
I0213 15:06:50.366066    9374 ssh_runner.go:195] Run: systemctl --version
I0213 15:06:50.366149    9374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-443000
I0213 15:06:50.419549    9374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/functional-443000/id_rsa Username:docker}
I0213 15:06:50.512600    9374 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-443000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-443000 | 54e38ab82df4b | 30B    |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-443000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-443000 | ca7d1d63d76d8 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/library/nginx                     | latest            | 247f7abff9f70 | 187MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/nginx                     | alpine            | 2b70e4aaac6b5 | 42.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-443000 image ls --format table --alsologtostderr:
I0213 15:06:56.770855    9414 out.go:291] Setting OutFile to fd 1 ...
I0213 15:06:56.771217    9414 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.771222    9414 out.go:304] Setting ErrFile to fd 2...
I0213 15:06:56.771226    9414 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.771427    9414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:06:56.772058    9414 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.772147    9414 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.772536    9414 cli_runner.go:164] Run: docker container inspect functional-443000 --format={{.State.Status}}
I0213 15:06:56.823189    9414 ssh_runner.go:195] Run: systemctl --version
I0213 15:06:56.823264    9414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-443000
I0213 15:06:56.874815    9414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/functional-443000/id_rsa Username:docker}
I0213 15:06:56.969263    9414 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0213 15:07:01.082017    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-443000 image ls --format json --alsologtostderr:
[{"id":"ca7d1d63d76d86243a7a78c770ff9485197cb5c6c373692c1dcd46aace7b5f7a","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-443000"],"size":"1240000"},
{"id":"2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-443000"],"size":"32900000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"54e38ab82df4bc523d33658f08da76458868fd3cebd17948b5acfa30d4be4fe6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-443000"],"size":"30"},
{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},
{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},
{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},
{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},
{"id":"247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},
{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-443000 image ls --format json --alsologtostderr:
I0213 15:06:56.467281    9408 out.go:291] Setting OutFile to fd 1 ...
I0213 15:06:56.467965    9408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.467974    9408 out.go:304] Setting ErrFile to fd 2...
I0213 15:06:56.467980    9408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.468624    9408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:06:56.469280    9408 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.469375    9408 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.469775    9408 cli_runner.go:164] Run: docker container inspect functional-443000 --format={{.State.Status}}
I0213 15:06:56.521130    9408 ssh_runner.go:195] Run: systemctl --version
I0213 15:06:56.521214    9408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-443000
I0213 15:06:56.571623    9408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/functional-443000/id_rsa Username:docker}
I0213 15:06:56.667388    9408 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
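
The stdout above is a flat JSON array of objects with id, repoDigests, repoTags, and size fields. A minimal Go sketch of decoding that output (an illustration only, not part of the test suite; the binary path and profile name are copied from this run):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the objects in the stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-443000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		// Print a short id (first 12 chars) plus the tags, like docker images does.
		fmt.Printf("%-12.12s %s\n", img.ID, img.RepoTags)
	}
}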

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-443000 image ls --format yaml --alsologtostderr:
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-443000
size: "32900000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ca7d1d63d76d86243a7a78c770ff9485197cb5c6c373692c1dcd46aace7b5f7a
repoDigests: []
repoTags:
- docker.io/localhost/my-image:functional-443000
size: "1240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 54e38ab82df4bc523d33658f08da76458868fd3cebd17948b5acfa30d4be4fe6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-443000
size: "30"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-443000 image ls --format yaml --alsologtostderr:
I0213 15:06:56.158919    9402 out.go:291] Setting OutFile to fd 1 ...
I0213 15:06:56.161809    9402 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.161815    9402 out.go:304] Setting ErrFile to fd 2...
I0213 15:06:56.161821    9402 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:56.162011    9402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:06:56.162770    9402 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.162862    9402 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:56.163241    9402 cli_runner.go:164] Run: docker container inspect functional-443000 --format={{.State.Status}}
I0213 15:06:56.215367    9402 ssh_runner.go:195] Run: systemctl --version
I0213 15:06:56.215445    9402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-443000
I0213 15:06:56.267748    9402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/functional-443000/id_rsa Username:docker}
I0213 15:06:56.362221    9402 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-443000 ssh pgrep buildkitd: exit status 1 (368.12051ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image build -t localhost/my-image:functional-443000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image build -t localhost/my-image:functional-443000 testdata/build --alsologtostderr: (4.872342331s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-443000 image build -t localhost/my-image:functional-443000 testdata/build --alsologtostderr:
I0213 15:06:50.983292    9390 out.go:291] Setting OutFile to fd 1 ...
I0213 15:06:50.983541    9390 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:50.983562    9390 out.go:304] Setting ErrFile to fd 2...
I0213 15:06:50.983568    9390 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:06:50.983937    9390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:06:50.984688    9390 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:50.985370    9390 config.go:182] Loaded profile config "functional-443000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:06:50.985795    9390 cli_runner.go:164] Run: docker container inspect functional-443000 --format={{.State.Status}}
I0213 15:06:51.039969    9390 ssh_runner.go:195] Run: systemctl --version
I0213 15:06:51.040041    9390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-443000
I0213 15:06:51.092574    9390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52728 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/functional-443000/id_rsa Username:docker}
I0213 15:06:51.185956    9390 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2560105581.tar
I0213 15:06:51.186046    9390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 15:06:51.200574    9390 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2560105581.tar
I0213 15:06:51.205058    9390 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2560105581.tar: stat -c "%s %y" /var/lib/minikube/build/build.2560105581.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2560105581.tar': No such file or directory
I0213 15:06:51.205086    9390 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2560105581.tar --> /var/lib/minikube/build/build.2560105581.tar (3072 bytes)
I0213 15:06:51.247862    9390 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2560105581
I0213 15:06:51.263392    9390 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2560105581 -xf /var/lib/minikube/build/build.2560105581.tar
I0213 15:06:51.278441    9390 docker.go:360] Building image: /var/lib/minikube/build/build.2560105581
I0213 15:06:51.278553    9390 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-443000 /var/lib/minikube/build/build.2560105581
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.5s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ca7d1d63d76d86243a7a78c770ff9485197cb5c6c373692c1dcd46aace7b5f7a done
#8 naming to localhost/my-image:functional-443000 done
#8 DONE 0.0s
I0213 15:06:55.734255    9390 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-443000 /var/lib/minikube/build/build.2560105581: (4.455740371s)
I0213 15:06:55.734318    9390 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2560105581
I0213 15:06:55.751780    9390 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2560105581.tar
I0213 15:06:55.766742    9390 build_images.go:207] Built localhost/my-image:functional-443000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2560105581.tar
I0213 15:06:55.766782    9390 build_images.go:123] succeeded building to: functional-443000
I0213 15:06:55.766786    9390 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.55s)

TestFunctional/parallel/ImageCommands/Setup (5.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.551015987s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-443000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr: (3.27120848s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.58s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr: (2.068143238s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.326395599s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-443000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image load --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr: (3.267543445s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.97s)

TestFunctional/parallel/DockerEnv/bash (1.76s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-443000 docker-env) && out/minikube-darwin-amd64 status -p functional-443000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-443000 docker-env) && out/minikube-darwin-amd64 status -p functional-443000": (1.067594433s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-443000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image save gcr.io/google-containers/addon-resizer:functional-443000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image save gcr.io/google-containers/addon-resizer:functional-443000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.549944563s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image rm gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.194329908s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.52s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-443000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-443000 image save --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-443000 image save --daemon gcr.io/google-containers/addon-resizer:functional-443000 --alsologtostderr: (1.439259851s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-443000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-443000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-443000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-443000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (21.78s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-189000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-189000 --driver=docker : (21.78440895s)
--- PASS: TestImageBuild/serial/Setup (21.78s)

TestImageBuild/serial/NormalBuild (4.33s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-189000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-189000: (4.327857121s)
--- PASS: TestImageBuild/serial/NormalBuild (4.33s)

TestImageBuild/serial/BuildWithBuildArg (1.22s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-189000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-189000: (1.221749311s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.22s)

TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-189000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-189000: (1.051951909s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.03s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-189000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-189000: (1.02500414s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.03s)

TestJSONOutput/start/Command (37.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-199000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-199000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (37.153665335s)
--- PASS: TestJSONOutput/start/Command (37.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-199000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-199000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-199000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-199000 --output=json --user=testUser: (10.794809909s)
--- PASS: TestJSONOutput/stop/Command (10.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.8s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-336000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-336000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (425.912127ms)

-- stdout --
	{"specversion":"1.0","id":"5ee52921-edde-4ee4-b60c-843db216a48f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-336000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7c78735-0c91-4170-b9ed-148cb81b1eeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"832f6925-e32a-4895-a728-4a943529246e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig"}}
	{"specversion":"1.0","id":"25a8c7ad-d9f8-44ac-a094-e6eefe272225","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"c9897dad-214c-4001-aaf1-d1bb644311f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80183d91-18e2-44b2-9bd9-84c5ccd897eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube"}}
	{"specversion":"1.0","id":"48dab4b1-1dcf-4b86-8ae8-0647b65e2bc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"48a7ec9a-def3-44b8-8354-77c0b7edc6a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-336000
--- PASS: TestErrorJSONOutput (0.80s)
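
Each stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, and a data map whose values are all strings). A minimal Go sketch of picking the error event out of such a stream (an illustration only, assuming the event lines arrive on stdin; the type and data keys are taken directly from the output above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents lines in the stdout above; every
// value inside "data" is a string in that output.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout above, this would print: DRV_UNSUPPORTED_OS (exit code 56): The driver 'fail' is not supported on darwin/amd64.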

TestKicCustomNetwork/create_custom_network (24.13s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-182000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-182000 --network=: (21.660493289s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-182000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-182000: (2.41578178s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.13s)

TestKicCustomNetwork/use_default_bridge_network (24.49s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-882000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-882000 --network=bridge: (22.197407081s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-882000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-882000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-882000: (2.237425874s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.49s)

TestKicExistingNetwork (24.51s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-397000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-397000 --network=existing-network: (21.904260831s)
helpers_test.go:175: Cleaning up "existing-network-397000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-397000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-397000: (2.262460378s)
--- PASS: TestKicExistingNetwork (24.51s)

TestKicCustomSubnet (23.9s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-133000 --subnet=192.168.60.0/24
E0213 15:19:17.232766    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-133000 --subnet=192.168.60.0/24: (21.43425681s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-133000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-133000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-133000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-133000: (2.412033553s)
--- PASS: TestKicCustomSubnet (23.90s)
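
The subnet assertion above reduces to the docker network inspect template shown in the log. A minimal Go sketch of the same check (an illustration only; the network name and subnet are copied from this run, and a running Docker daemon is assumed):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the test passes to docker network inspect above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-133000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet: %q", got)
	}
	fmt.Println("subnet matches 192.168.60.0/24")
}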

TestKicStaticIP (24.78s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-954000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-954000 --static-ip=192.168.200.200: (22.146920439s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-954000 ip
helpers_test.go:175: Cleaning up "static-ip-954000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-954000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-954000: (2.398459382s)
--- PASS: TestKicStaticIP (24.78s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (51.87s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-625000 --driver=docker 
E0213 15:20:14.208906    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-625000 --driver=docker : (22.060951804s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-627000 --driver=docker 
E0213 15:20:40.281875    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-627000 --driver=docker : (23.160148476s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-625000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-627000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-627000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-627000: (2.46195684s)
helpers_test.go:175: Cleaning up "first-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-625000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-625000: (2.436790704s)
--- PASS: TestMinikubeProfile (51.87s)

TestMountStart/serial/StartWithMountFirst (7.86s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-592000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-592000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.861597119s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.86s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-592000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.97s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-610000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-610000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.965252892s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.97s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-610000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (2.07s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-592000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-592000 --alsologtostderr -v=5: (2.071613355s)
--- PASS: TestMountStart/serial/DeleteFirst (2.07s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-610000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.55s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-610000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-610000: (1.552424347s)
--- PASS: TestMountStart/serial/Stop (1.55s)

TestMountStart/serial/RestartStopped (8.88s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-610000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-610000: (7.881092806s)
--- PASS: TestMountStart/serial/RestartStopped (8.88s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-610000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (64.34s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-727000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-727000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m3.539418265s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.34s)

TestMultiNode/serial/DeployApp2Nodes (47.14s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-727000 -- rollout status deployment/busybox: (6.695833282s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-ck4xc -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-xf7z8 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-ck4xc -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-xf7z8 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-ck4xc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-xf7z8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (47.14s)
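
The eight "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a retry loop, not a failure: the rollout has finished, but the second busybox replica has not yet been assigned an address, so the test re-runs the same jsonpath query until two IPs appear. A sketch of that poll pattern (hypothetical helper; the actual multinode_test.go implementation differs):

    // podippoll.go - illustrative poll-until-ready helper.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPodIPs re-runs the jsonpath query seen in the log until the
    // expected number of pod IPs is reported or the deadline passes.
    func waitForPodIPs(context string, want int, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context, "get", "pods",
                "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil && len(strings.Fields(string(out))) >= want {
                return nil
            }
            time.Sleep(5 * time.Second) // address assignment can lag scheduling
        }
        return fmt.Errorf("timed out waiting for %d pod IPs", want)
    }

    func main() {
        if err := waitForPodIPs("multinode-727000", 2, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }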

TestMultiNode/serial/PingHostFrom2Pods (0.94s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-ck4xc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-ck4xc -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-xf7z8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-727000 -- exec busybox-5b5d89c9d6-xf7z8 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (16.43s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-727000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-727000 -v 3 --alsologtostderr: (15.359194799s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr: (1.073633199s)
--- PASS: TestMultiNode/serial/AddNode (16.43s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-727000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.49s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.49s)

TestMultiNode/serial/CopyFile (14.65s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 status --output json --alsologtostderr: (1.003453207s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp testdata/cp-test.txt multinode-727000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2484933053/001/cp-test_multinode-727000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000:/home/docker/cp-test.txt multinode-727000-m02:/home/docker/cp-test_multinode-727000_multinode-727000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test_multinode-727000_multinode-727000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000:/home/docker/cp-test.txt multinode-727000-m03:/home/docker/cp-test_multinode-727000_multinode-727000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test_multinode-727000_multinode-727000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp testdata/cp-test.txt multinode-727000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2484933053/001/cp-test_multinode-727000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m02:/home/docker/cp-test.txt multinode-727000:/home/docker/cp-test_multinode-727000-m02_multinode-727000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test_multinode-727000-m02_multinode-727000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m02:/home/docker/cp-test.txt multinode-727000-m03:/home/docker/cp-test_multinode-727000-m02_multinode-727000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test_multinode-727000-m02_multinode-727000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp testdata/cp-test.txt multinode-727000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2484933053/001/cp-test_multinode-727000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m03:/home/docker/cp-test.txt multinode-727000:/home/docker/cp-test_multinode-727000-m03_multinode-727000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000 "sudo cat /home/docker/cp-test_multinode-727000-m03_multinode-727000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 cp multinode-727000-m03:/home/docker/cp-test.txt multinode-727000-m02:/home/docker/cp-test_multinode-727000-m03_multinode-727000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 ssh -n multinode-727000-m02 "sudo cat /home/docker/cp-test_multinode-727000-m03_multinode-727000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.65s)

TestMultiNode/serial/StopNode (3s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 node stop m03: (1.492763864s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-727000 status: exit status 7 (751.165281ms)

-- stdout --
	multinode-727000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr: exit status 7 (754.000838ms)

-- stdout --
	multinode-727000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0213 15:23:48.460838   12491 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:23:48.461038   12491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:23:48.461045   12491 out.go:304] Setting ErrFile to fd 2...
	I0213 15:23:48.461049   12491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:23:48.461263   12491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:23:48.461471   12491 out.go:298] Setting JSON to false
	I0213 15:23:48.461497   12491 mustload.go:65] Loading cluster: multinode-727000
	I0213 15:23:48.461539   12491 notify.go:220] Checking for updates...
	I0213 15:23:48.461819   12491 config.go:182] Loaded profile config "multinode-727000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:23:48.461832   12491 status.go:255] checking status of multinode-727000 ...
	I0213 15:23:48.462249   12491 cli_runner.go:164] Run: docker container inspect multinode-727000 --format={{.State.Status}}
	I0213 15:23:48.514530   12491 status.go:330] multinode-727000 host status = "Running" (err=<nil>)
	I0213 15:23:48.514599   12491 host.go:66] Checking if "multinode-727000" exists ...
	I0213 15:23:48.514872   12491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727000
	I0213 15:23:48.566646   12491 host.go:66] Checking if "multinode-727000" exists ...
	I0213 15:23:48.566935   12491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:23:48.567006   12491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727000
	I0213 15:23:48.619766   12491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53591 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/multinode-727000/id_rsa Username:docker}
	I0213 15:23:48.713620   12491 ssh_runner.go:195] Run: systemctl --version
	I0213 15:23:48.718011   12491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:23:48.734572   12491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-727000
	I0213 15:23:48.789793   12491 kubeconfig.go:92] found "multinode-727000" server: "https://127.0.0.1:53590"
	I0213 15:23:48.789818   12491 api_server.go:166] Checking apiserver status ...
	I0213 15:23:48.789854   12491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:23:48.807105   12491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	W0213 15:23:48.823343   12491 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:23:48.823414   12491 ssh_runner.go:195] Run: ls
	I0213 15:23:48.827750   12491 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53590/healthz ...
	I0213 15:23:48.832677   12491 api_server.go:279] https://127.0.0.1:53590/healthz returned 200:
	ok
	I0213 15:23:48.832691   12491 status.go:421] multinode-727000 apiserver status = Running (err=<nil>)
	I0213 15:23:48.832699   12491 status.go:257] multinode-727000 status: &{Name:multinode-727000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 15:23:48.832720   12491 status.go:255] checking status of multinode-727000-m02 ...
	I0213 15:23:48.832953   12491 cli_runner.go:164] Run: docker container inspect multinode-727000-m02 --format={{.State.Status}}
	I0213 15:23:48.885103   12491 status.go:330] multinode-727000-m02 host status = "Running" (err=<nil>)
	I0213 15:23:48.885128   12491 host.go:66] Checking if "multinode-727000-m02" exists ...
	I0213 15:23:48.885379   12491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727000-m02
	I0213 15:23:48.936944   12491 host.go:66] Checking if "multinode-727000-m02" exists ...
	I0213 15:23:48.937177   12491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 15:23:48.937228   12491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727000-m02
	I0213 15:23:48.990958   12491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53629 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/multinode-727000-m02/id_rsa Username:docker}
	I0213 15:23:49.085398   12491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:23:49.102296   12491 status.go:257] multinode-727000-m02 status: &{Name:multinode-727000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0213 15:23:49.102315   12491 status.go:255] checking status of multinode-727000-m03 ...
	I0213 15:23:49.102541   12491 cli_runner.go:164] Run: docker container inspect multinode-727000-m03 --format={{.State.Status}}
	I0213 15:23:49.153757   12491 status.go:330] multinode-727000-m03 host status = "Stopped" (err=<nil>)
	I0213 15:23:49.153782   12491 status.go:343] host is not running, skipping remaining checks
	I0213 15:23:49.153791   12491 status.go:257] multinode-727000-m03 status: &{Name:multinode-727000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)
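
Both status invocations above print the per-node table and exit with status 7 rather than 0; per this log, 7 is what "minikube status" returns when a host in the profile is stopped. A short sketch (assuming minikube is on PATH and the profile still exists) of recovering that exit code from a calling program:

    // statusexit.go - illustrative only.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "-p", "multinode-727000", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // the per-node table still arrives on stdout
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Println("status exit code:", exitErr.ExitCode()) // 7 in the run above
        }
    }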

TestMultiNode/serial/StartAfterStop (14.29s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 node start m03 --alsologtostderr: (13.183764481s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.29s)

TestMultiNode/serial/RestartKeepsNodes (121.58s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-727000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-727000
E0213 15:24:17.226931    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-727000: (22.951845548s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-727000 --wait=true -v=8 --alsologtostderr
E0213 15:25:14.202865    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-727000 --wait=true -v=8 --alsologtostderr: (1m38.50091792s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-727000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.58s)

TestMultiNode/serial/DeleteNode (5.99s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 node delete m03: (5.113032188s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.99s)

TestMultiNode/serial/StopMultiNode (21.81s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-727000 stop: (21.492785324s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-727000 status: exit status 7 (157.977689ms)

-- stdout --
	multinode-727000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr: exit status 7 (157.452925ms)

-- stdout --
	multinode-727000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0213 15:26:32.724246   12945 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:26:32.724499   12945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:26:32.724504   12945 out.go:304] Setting ErrFile to fd 2...
	I0213 15:26:32.724509   12945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:26:32.724689   12945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
	I0213 15:26:32.724883   12945 out.go:298] Setting JSON to false
	I0213 15:26:32.724907   12945 mustload.go:65] Loading cluster: multinode-727000
	I0213 15:26:32.724943   12945 notify.go:220] Checking for updates...
	I0213 15:26:32.725237   12945 config.go:182] Loaded profile config "multinode-727000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:26:32.725249   12945 status.go:255] checking status of multinode-727000 ...
	I0213 15:26:32.725678   12945 cli_runner.go:164] Run: docker container inspect multinode-727000 --format={{.State.Status}}
	I0213 15:26:32.776076   12945 status.go:330] multinode-727000 host status = "Stopped" (err=<nil>)
	I0213 15:26:32.776108   12945 status.go:343] host is not running, skipping remaining checks
	I0213 15:26:32.776118   12945 status.go:257] multinode-727000 status: &{Name:multinode-727000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 15:26:32.776154   12945 status.go:255] checking status of multinode-727000-m02 ...
	I0213 15:26:32.776410   12945 cli_runner.go:164] Run: docker container inspect multinode-727000-m02 --format={{.State.Status}}
	I0213 15:26:32.825949   12945 status.go:330] multinode-727000-m02 host status = "Stopped" (err=<nil>)
	I0213 15:26:32.825974   12945 status.go:343] host is not running, skipping remaining checks
	I0213 15:26:32.825982   12945 status.go:257] multinode-727000-m02 status: &{Name:multinode-727000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.81s)

TestMultiNode/serial/RestartMultiNode (82.5s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-727000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0213 15:26:37.259027    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-727000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m21.601120113s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-727000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.50s)
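
The last kubectl call above walks each node's conditions with a go-template and prints only the Ready status. The same readiness check can be written against the JSON output instead; a hypothetical equivalent, not part of the suite:

    // nodesready.go - illustrative only; decodes "kubectl get nodes -o json".
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type nodeList struct {
        Items []struct {
            Metadata struct{ Name string }
            Status   struct {
                Conditions []struct{ Type, Status string }
            }
        }
    }

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        var nl nodeList
        if err := json.Unmarshal(out, &nl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, n := range nl.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Println(n.Metadata.Name, "Ready:", c.Status)
                }
            }
        }
    }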

TestMultiNode/serial/ValidateNameConflict (26.32s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-727000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-727000-m02 --driver=docker 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-727000-m02 --driver=docker : exit status 14 (431.090221ms)

-- stdout --
	* [multinode-727000-m02] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-727000-m02' is duplicated with machine name 'multinode-727000-m02' in profile 'multinode-727000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-727000-m03 --driver=docker 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-727000-m03 --driver=docker : (22.8393331s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-727000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-727000: exit status 80 (495.13202ms)

-- stdout --
	* Adding node m03 to cluster multinode-727000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-727000-m03 already exists in multinode-727000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-727000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-727000-m03: (2.493893268s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.32s)

TestPreload (175.37s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-332000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0213 15:29:17.221035    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-332000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m18.135190768s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-332000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-332000 image pull gcr.io/k8s-minikube/busybox: (5.321409058s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-332000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-332000: (10.797531328s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-332000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0213 15:30:14.196978    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-332000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m18.348792483s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-332000 image list
helpers_test.go:175: Cleaning up "test-preload-332000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-332000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-332000: (2.466049978s)
--- PASS: TestPreload (175.37s)

TestScheduledStopUnix (96.13s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-985000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-985000 --memory=2048 --driver=docker : (21.782335955s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-985000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-985000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-985000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-985000: exit status 7 (119.555218ms)

-- stdout --
	scheduled-stop-985000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000: exit status 7 (118.339319ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-985000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-985000: (2.308154985s)
--- PASS: TestScheduledStopUnix (96.13s)

TestInsufficientStorage (10.56s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-418000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-418000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.525404881s)

-- stdout --
	{"specversion":"1.0","id":"36aea072-557d-4e56-910e-3d570f63b9a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-418000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"129e74f9-0495-4c83-8580-9610a3b714c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"92c33607-39cf-4441-85bb-f1405476cb2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig"}}
	{"specversion":"1.0","id":"97de9dbf-c0d9-40d9-bf66-5369cb82417f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"55fe47a0-abaf-41ae-aeb1-98a6ede63044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf8175dc-d354-4d61-8b4b-bd2dd9a06ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube"}}
	{"specversion":"1.0","id":"4778d993-c152-44b7-a96f-151960e2c85d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9846dfef-4702-46ad-81ee-a44817fd6c9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"71204dde-ce86-4e1e-872c-1675890d3814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1586d9c8-3a80-4f5e-b5cf-6af7250612a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebda651c-215a-41fe-a467-df9d81931745","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"9c2a61dd-aced-4297-bcaf-ae0d80117319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-418000 in cluster insufficient-storage-418000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfb61404-eddf-48e9-9525-36e2b473c3ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c779307c-0d01-4a53-a2c0-9fd52881264e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b41f737f-df26-4e85-ae46-97fce1b5d8e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-418000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-418000 --output=json --layout=cluster: exit status 7 (396.804227ms)

-- stdout --
	{"Name":"insufficient-storage-418000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-418000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0213 15:38:24.757795   14396 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-418000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-418000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-418000 --output=json --layout=cluster: exit status 7 (396.714464ms)

-- stdout --
	{"Name":"insufficient-storage-418000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-418000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0213 15:38:25.155039   14406 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-418000" does not appear in /Users/jenkins/minikube-integration/18169-6320/kubeconfig
	E0213 15:38:25.171772   14406 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/insufficient-storage-418000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-418000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-418000: (2.244504526s)
--- PASS: TestInsufficientStorage (10.56s)
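
The "--output=json --layout=cluster" form used above keeps stdout machine-readable even when the command exits non-zero (7 here), so a caller can assert on fields such as StatusName. A minimal decoder sketch, with struct fields taken from the JSON in this log (illustrative only; the profile from the run has since been deleted, so substitute your own):

    // layoutstatus.go - illustrative decoder for "minikube status --output=json --layout=cluster".
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type clusterStatus struct {
        Name         string
        StatusName   string
        StatusDetail string
        Nodes        []struct {
            Name       string
            StatusName string
        }
    }

    func main() {
        // Ignore the exit error deliberately: the JSON is still written to stdout.
        out, _ := exec.Command("minikube", "status", "-p", "insufficient-storage-418000",
            "--output=json", "--layout=cluster").Output()
        var st clusterStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
        for _, n := range st.Nodes {
            fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
        }
    }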

TestRunningBinaryUpgrade (183.33s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.441984334 start -p running-upgrade-889000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:120: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.441984334 start -p running-upgrade-889000 --memory=2200 --vm-driver=docker : (2m20.928311506s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (35.175885833s)
helpers_test.go:175: Cleaning up "running-upgrade-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-889000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-889000: (2.436273171s)
--- PASS: TestRunningBinaryUpgrade (183.33s)

TestMissingContainerUpgrade (104.31s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3746255967 start -p missing-upgrade-355000 --memory=2200 --driver=docker 
E0213 15:44:17.232771    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.3746255967 start -p missing-upgrade-355000 --memory=2200 --driver=docker : (33.556517719s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-355000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-355000: (10.217809729s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-355000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-355000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0213 15:45:14.208771    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-355000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (53.232206081s)
helpers_test.go:175: Cleaning up "missing-upgrade-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-355000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-355000: (2.415245382s)
--- PASS: TestMissingContainerUpgrade (104.31s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.01s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18169
- KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1243948940/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1243948940/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1243948940/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1243948940/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.01s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.92s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18169
- KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4232358732/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4232358732/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4232358732/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4232358732/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.92s)

TestStoppedBinaryUpgrade/Setup (4.61s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.61s)

TestStoppedBinaryUpgrade/Upgrade (73.61s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.518380421 start -p stopped-upgrade-680000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:183: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.518380421 start -p stopped-upgrade-680000 --memory=2200 --vm-driver=docker : (31.843253114s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.518380421 -p stopped-upgrade-680000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.518380421 -p stopped-upgrade-680000 stop: (12.305256682s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-680000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-680000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (29.464612216s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.61s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.5s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-680000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-680000: (2.495608303s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.50s)

TestPause/serial/Start (36.73s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-219000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-219000 --memory=2048 --install-addons=false --wait=all --driver=docker : (36.732668647s)
--- PASS: TestPause/serial/Start (36.73s)

TestPause/serial/SecondStartNoReconfiguration (34.37s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-219000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-219000 --alsologtostderr -v=1 --driver=docker : (34.350958076s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.37s)

TestPause/serial/Pause (0.69s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-219000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.42s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-219000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-219000 --output=json --layout=cluster: exit status 2 (418.595232ms)

-- stdout --
	{"Name":"pause-219000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-219000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
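The StatusCode values in these cluster-status payloads reuse HTTP status codes: this report shows 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error), and 507 (InsufficientStorage), while the status command's process exit code (2 here, 7 for the insufficient-storage cluster earlier) separately flags a non-running cluster. A small Go lookup table covering only the pairs that appear verbatim in this report; it is illustrative, not an official or exhaustive minikube table:

    package main

    import "fmt"

    // statusNames lists only the StatusCode/StatusName pairs observed in
    // this report's output.
    var statusNames = map[int]string{
    	200: "OK",
    	405: "Stopped",
    	418: "Paused",
    	500: "Error",
    	507: "InsufficientStorage",
    }

    func main() {
    	// The paused cluster above reports apiserver=418 and kubelet=405.
    	for _, code := range []int{418, 405} {
    		fmt.Printf("%d => %s\n", code, statusNames[code])
    	}
    }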

TestPause/serial/Unpause (0.68s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-219000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.73s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-219000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

TestPause/serial/DeletePaused (2.49s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-219000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-219000 --alsologtostderr -v=5: (2.491200973s)
--- PASS: TestPause/serial/DeletePaused (2.49s)

TestPause/serial/VerifyDeletedResources (0.38s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-219000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-219000: exit status 1 (54.597507ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-219000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.56s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (560.716509ms)

-- stdout --
	* [NoKubernetes-739000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18169
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.56s)
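This test pins down a flag-validation path: minikube refuses --kubernetes-version together with --no-kubernetes and exits with status 14 (MK_USAGE) before doing any work, printing the hint shown in the stderr block above. The sketch below reproduces the same mutual-exclusion check with Go's standard flag package; it is illustrative only and is not minikube's actual flag handling.

    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    	flag.Parse()

    	// The two flags are mutually exclusive: reject the combination up
    	// front, mirroring the MK_USAGE exit status seen in this test.
    	if *noKubernetes && *kubernetesVersion != "" {
    		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14)
    	}
    }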

TestNoKubernetes/serial/StartWithK8s (26.69s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-739000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-739000 --driver=docker : (26.250363396s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-739000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.69s)

TestNoKubernetes/serial/StartWithStopK8s (9.39s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --driver=docker : (6.737483821s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-739000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-739000 status -o json: exit status 2 (452.669003ms)

-- stdout --
	{"Name":"NoKubernetes-739000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-739000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-739000: (2.196233326s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.39s)

TestNoKubernetes/serial/Start (7.12s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-739000 --no-kubernetes --driver=docker : (7.120177546s)
--- PASS: TestNoKubernetes/serial/Start (7.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.718841ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (1.44s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

TestNoKubernetes/serial/Stop (1.58s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-739000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-739000: (1.582813887s)
--- PASS: TestNoKubernetes/serial/Stop (1.58s)

TestNoKubernetes/serial/StartNoArgs (8.09s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-739000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-739000 --driver=docker : (8.089481256s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.222871ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestNetworkPlugins/group/auto/Start (40.42s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (40.418118216s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.42s)

TestNetworkPlugins/group/kindnet/Start (51.56s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0213 15:49:17.241756    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.55731172s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.56s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (15.21s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cd84p" [25fad23c-aa63-41a2-8f59-ed20c345fddf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cd84p" [25fad23c-aa63-41a2-8f59-ed20c345fddf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.005139569s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.21s)

TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-msvq2" [230b99b7-d669-44e0-833b-a43f61a60c23] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005676461s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m4ddl" [fc12ac13-59d8-4011-afa2-e50bdffc4ac5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 15:50:14.231534    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-m4ddl" [fc12ac13-59d8-4011-afa2-e50bdffc4ac5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004718202s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

TestNetworkPlugins/group/calico/Start (66.63s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m6.625228553s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.63s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (54.82s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (54.819081838s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.82s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hx9ff" [6d6d3083-311c-4357-8daf-db3167970337] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007173336s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.48s)

TestNetworkPlugins/group/calico/NetCatPod (14.22s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rzbq2" [a4203ade-8b64-4f3f-85fb-48346ab1cb0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rzbq2" [a4203ade-8b64-4f3f-85fb-48346ab1cb0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003917286s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tpt55" [ea9b30aa-7ab3-440f-ad5b-e88fd022d24c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tpt55" [ea9b30aa-7ab3-440f-ad5b-e88fd022d24c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004301882s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.20s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/false/Start (77.86s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (1m17.862395852s)
--- PASS: TestNetworkPlugins/group/false/Start (77.86s)

TestNetworkPlugins/group/enable-default-cni/Start (39.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (39.290164029s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7nd5h" [d7ebc617-e53f-40e3-b77c-e8b5a2778b4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7nd5h" [d7ebc617-e53f-40e3-b77c-e8b5a2778b4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004066826s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/false/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

TestNetworkPlugins/group/false/NetCatPod (97.23s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xtfpc" [03f9b12c-978d-45f4-a2db-af6c8d75c394] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xtfpc" [03f9b12c-978d-45f4-a2db-af6c8d75c394] Running
E0213 15:55:06.341660    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.348035    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.358298    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.378782    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.418977    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.499932    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.660544    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:06.980778    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:07.622349    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:55:08.902791    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 1m37.00329051s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (97.23s)

TestNetworkPlugins/group/flannel/Start (49.84s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0213 15:54:00.299796    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:54:17.248153    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (49.841816494s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.84s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n9hp9" [eac34528-7cce-4bf2-82bc-5fea03443a4a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003871652s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (14.19s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dl2qn" [1a734044-87c5-478e-bea2-45263e2d0768] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 15:54:42.460086    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.465251    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.475534    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.497313    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.537789    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.617909    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:42.778508    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:43.098705    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:43.738835    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:54:45.019731    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dl2qn" [1a734044-87c5-478e-bea2-45263e2d0768] Running
E0213 15:54:47.580200    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.003956048s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.19s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/false/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (76.56s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0213 15:55:23.422398    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:55:26.823721    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (1m16.558117924s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.56s)

TestNetworkPlugins/group/kubenet/Start (38.9s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0213 15:55:47.303744    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:56:04.382386    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-208000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (38.899806837s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (38.90s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fjcfx" [db612490-bb0b-4e59-8c3d-8979f0b7c677] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fjcfx" [db612490-bb0b-4e59-8c3d-8979f0b7c677] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.005155732s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.23s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-208000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-208000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-208000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7nrqb" [80177e67-8ff4-492e-9047-4085360dd9d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 15:56:38.487989    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-7nrqb" [80177e67-8ff4-492e-9047-4085360dd9d5] Running
E0213 15:56:45.817116    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:45.823192    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:45.833302    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:45.853525    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:45.893659    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:45.973813    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:46.134072    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:46.515121    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:56:47.155776    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.005852113s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.22s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-208000 exec deployment/netcat -- nslookup kubernetes.default
E0213 15:56:48.437128    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-208000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0213 15:56:48.728023    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-476000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0213 15:57:26.300882    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:57:26.837639    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:57:50.169408    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 15:57:50.181691    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 15:58:05.128312    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.134118    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.144834    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.165914    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.206045    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.286505    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.447061    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:05.769152    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:06.409925    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-476000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (53.096401558s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.10s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-476000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17387aac-6d90-4923-b17a-ffede58da60e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0213 15:58:07.690343    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:07.797021    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:58:10.250485    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [17387aac-6d90-4923-b17a-ffede58da60e] Running
E0213 15:58:15.372007    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004930477s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-476000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-476000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-476000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069721472s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-476000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-476000 --alsologtostderr -v=3
E0213 15:58:25.612412    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-476000 --alsologtostderr -v=3: (10.982659511s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.98s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-476000 -n no-preload-476000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-476000 -n no-preload-476000: exit status 7 (107.068898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-476000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.42s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-476000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0213 15:58:32.559920    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.565348    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.575658    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.596383    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.638452    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.718724    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:32.878985    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:33.199447    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:33.840467    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:35.122676    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:37.682868    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:42.804171    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:58:46.094310    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:58:53.045848    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:59:12.087852    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 15:59:13.525924    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:59:17.241901    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:59:27.053689    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 15:59:29.716550    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 15:59:31.370878    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.376416    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.388599    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.409184    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.449406    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.529656    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:31.690268    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:32.010750    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:32.650969    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:33.931177    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:36.491350    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:41.611639    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:42.453480    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 15:59:51.851852    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 15:59:54.485993    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 15:59:57.276347    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 16:00:06.334327    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 16:00:10.138172    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 16:00:12.331605    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 16:00:14.217760    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 16:00:34.020152    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 16:00:48.972805    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:00:53.291412    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-476000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.29.0-rc.2: (5m34.646022151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-476000 -n no-preload-476000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.10s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-745000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-745000 --alsologtostderr -v=3: (1.541825673s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-745000 -n old-k8s-version-745000: exit status 7 (107.97103ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-745000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.44s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wbwd" [c73da977-d07f-4e8d-bb74-04ecfac91deb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wbwd" [c73da977-d07f-4e8d-bb74-04ecfac91deb] Running
E0213 16:04:17.235202    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 16:04:18.120769    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005310692s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wbwd" [c73da977-d07f-4e8d-bb74-04ecfac91deb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004408775s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-476000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-476000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-476000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-476000 -n no-preload-476000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-476000 -n no-preload-476000: exit status 2 (429.964984ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-476000 -n no-preload-476000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-476000 -n no-preload-476000: exit status 2 (427.135919ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-476000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-476000 -n no-preload-476000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-476000 -n no-preload-476000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.45s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-743000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0213 16:04:31.364620    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 16:04:42.446802    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 16:04:59.048265    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 16:05:06.327707    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 16:05:14.211290    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-743000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (1m15.324080128s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.32s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b2a6427-f082-45de-b82e-420287b9c450] Pending
helpers_test.go:344: "busybox" [8b2a6427-f082-45de-b82e-420287b9c450] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b2a6427-f082-45de-b82e-420287b9c450] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.003901916s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-743000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-743000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.155077111s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-743000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-743000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-743000 --alsologtostderr -v=3: (10.947325518s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.95s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-743000 -n embed-certs-743000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-743000 -n embed-certs-743000: exit status 7 (108.592499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-743000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.45s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-743000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4
E0213 16:06:14.092407    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:06:28.232524    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 16:06:34.274606    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:06:41.777782    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
E0213 16:06:45.804145    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
E0213 16:07:01.957412    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
E0213 16:08:05.114567    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/enable-default-cni-208000/client.crt: no such file or directory
E0213 16:08:07.417837    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.423145    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.434454    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.454912    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.496608    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.577056    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:07.737281    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:08.057802    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:08.698173    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:09.979198    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:12.539923    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:17.660080    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:27.900489    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:08:32.546598    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
E0213 16:08:48.380175    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:09:17.228848    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 16:09:29.341586    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:09:31.356204    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
E0213 16:09:42.440957    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 16:10:06.321344    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
E0213 16:10:14.204659    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 16:10:40.279689    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 16:10:51.260776    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/no-preload-476000/client.crt: no such file or directory
E0213 16:11:05.485499    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
E0213 16:11:14.085665    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kubenet-208000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-743000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.4: (5m14.147344225s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-743000 -n embed-certs-743000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (314.61s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k6px6" [530e96a3-874f-4e13-97bd-9f6d4ac5d087] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0213 16:11:28.225912    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/calico-208000/client.crt: no such file or directory
E0213 16:11:29.366193    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/kindnet-208000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k6px6" [530e96a3-874f-4e13-97bd-9f6d4ac5d087] Running
E0213 16:11:34.266065    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/bridge-208000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.034842354s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.04s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k6px6" [530e96a3-874f-4e13-97bd-9f6d4ac5d087] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004731097s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-743000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-743000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-743000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-743000 -n embed-certs-743000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-743000 -n embed-certs-743000: exit status 2 (424.931261ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-743000 -n embed-certs-743000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-743000 -n embed-certs-743000: exit status 2 (425.639423ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-743000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-743000 -n embed-certs-743000
E0213 16:11:45.798452    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/custom-flannel-208000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-743000 -n embed-certs-743000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-788000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-788000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (38.429423224s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-788000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b6c99a9-a968-4a12-ac03-1edfb8191415] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b6c99a9-a968-4a12-ac03-1edfb8191415] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.003782692s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-788000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-788000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-788000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.158875075s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-788000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-788000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-788000 --alsologtostderr -v=3: (10.884805385s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000: exit status 7 (109.741551ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-788000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-788000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-788000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.4: (5m36.963146197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.43s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bspfz" [4576cf4c-a61e-477d-b54b-6e81daaf4e44] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0213 16:18:32.714888    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/false-208000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bspfz" [4576cf4c-a61e-477d-b54b-6e81daaf4e44] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004477234s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bspfz" [4576cf4c-a61e-477d-b54b-6e81daaf4e44] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004987511s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-788000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-788000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-788000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000: exit status 2 (431.942106ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000: exit status 2 (443.134846ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-788000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-788000 -n default-k8s-diff-port-788000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)

TestStartStop/group/newest-cni/serial/FirstStart (34.82s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-926000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-926000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (34.817770765s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.82s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-926000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0213 16:19:31.527090    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/flannel-208000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-926000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.226358262s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/newest-cni/serial/Stop (9.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-926000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-926000 --alsologtostderr -v=3: (9.234135738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-926000 -n newest-cni-926000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-926000 -n newest-cni-926000: exit status 7 (109.344024ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-926000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (29.63s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-926000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2
E0213 16:19:42.611469    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/auto-208000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-926000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.29.0-rc.2: (29.170494995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-926000 -n newest-cni-926000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.63s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-926000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-926000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-926000 -n newest-cni-926000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-926000 -n newest-cni-926000: exit status 2 (432.549667ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-926000 -n newest-cni-926000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-926000 -n newest-cni-926000: exit status 2 (447.25382ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-926000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-926000 -n newest-cni-926000
E0213 16:20:14.377784    6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-926000 -n newest-cni-926000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

Test skip (21/333)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (20.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.48274ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l4dbd" [f8d65bfc-252a-40d8-9ca6-859bd5201617] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004916374s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dzvns" [7f3b0fac-28fb-490d-abab-9dc5762a6f4b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004488352s
addons_test.go:340: (dbg) Run:  kubectl --context addons-441000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-441000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-441000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.133017065s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (20.22s)

TestAddons/parallel/Ingress (11.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-441000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-441000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-441000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [db8281a1-1b73-4d1a-8a8e-855809731355] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [db8281a1-1b73-4d1a-8a8e-855809731355] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005194686s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-441000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.82s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (15.15s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-443000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-443000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-x2sj8" [41638b85-1d6f-40c0-8c88-efc8109a2240] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-x2sj8" [41638b85-1d6f-40c0-8c88-efc8109a2240] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.004766728s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (15.15s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.7s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-208000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-208000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-208000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/hosts:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/resolv.conf:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-208000

>>> host: crictl pods:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: crictl containers:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> k8s: describe netcat deployment:
error: context "cilium-208000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-208000" does not exist

>>> k8s: netcat logs:
error: context "cilium-208000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-208000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-208000" does not exist

>>> k8s: coredns logs:
error: context "cilium-208000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-208000" does not exist

>>> k8s: api server logs:
error: context "cilium-208000" does not exist

>>> host: /etc/cni:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: ip a s:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: ip r s:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: iptables-save:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: iptables table nat:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-208000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-208000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-208000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-208000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-208000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-208000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-208000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-208000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-208000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-208000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-208000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: kubelet daemon config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> k8s: kubelet logs:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-208000

>>> host: docker daemon status:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: docker daemon config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: docker system info:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: cri-docker daemon status:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: cri-docker daemon config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: cri-dockerd version:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: containerd daemon status:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: containerd daemon config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: containerd config dump:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: crio daemon status:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: crio daemon config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: /etc/crio:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

>>> host: crio config:
* Profile "cilium-208000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-208000"

----------------------- debugLogs end: cilium-208000 [took: 6.209972766s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-208000
--- SKIP: TestNetworkPlugins/group/cilium (6.70s)

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-253000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-253000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)